
Atomic move operation for element reparenting & reordering #1255

Open · domfarolino opened this issue Feb 14, 2024 · 103 comments

Assignee: domfarolino
Labels: a11y-tracker · addition/proposal · needs implementer interest · stage: 2 Iteration

Comments

@domfarolino (Member) commented Feb 14, 2024

What problem are you trying to solve?

Chrome (@domfarolino, @noamr, @mfreed7) is interested in pursuing the addition of an atomic move primitive in the DOM Standard. This would allow an element to be re-parented or re-ordered without today's side effects of first being removed and then inserted.

Here are all of the prior issues/PRs I could find related to this problem space:

Problem

Without an atomic move operation, re-parenting or re-ordering elements involves first removing them and then re-inserting them. With the DOM Standard's current removal/insertion model, this resets lots of state on various elements, including iframe document state, selection/focus on <input>s, and more. See @josepharhar's reparenting demo for a more exhaustive list of state that gets reset.
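
To make the problem concrete, here is a minimal sketch (assuming a page that already contains an `<iframe>` and a `#new-parent` container; the names are illustrative):

// Today, "moving" a connected node is really remove + insert.
const iframe = document.querySelector('iframe');
// Although the intent is only to move the iframe, it is removed and then
// re-inserted, so its browsing context is torn down and the document reloads.
document.querySelector('#new-parent').appendChild(iframe);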

This causes lots of developer pain, as recently voiced on X by frameworks like HTMX, by companies such as Wix and Microsoft, and internally at Google.

This state-resetting is in part caused by the DOM Standard's current insertion & removal model. While well-defined, its model of insertion and removal steps has two issues, both captured by #808:

  1. Undesirable model: The current DOM Standard allows for the non-atomic insertion of multiple nodes at a time. In practice, this means when appending e.g., a DocumentFragment, script can run in between each individual child insertion, thus observing DOM state before the entire fragment insertion is complete.
  2. Interop issues: While Safari matches the spec, Chromium & Gecko have a model that ensures all DOM mutations are synchronously performed before any script runs as a result of the mutations.

What solutions exist today?

One very limited partial solution that does not actually involve any DOM tree manipulation is this shadow DOM example that @emilio had posted a while back: whatwg/html#5484 (comment) (see my brief recreation of it below).

[Screen recording "Screen Recording 2024-01-29 at 5 00 26 PM": recreation of the shadow DOM slot example]

But as mentioned, this does not seem to perform any real DOM mutations; rather, the slot mutation seems to just visually compose the element in the right place. Throughout this example, the iframe's actual parent does not change.


Otherwise, we know there is some historical precedent for trying to solve this problem with WebKit's since-rolled-back "magic iframes". See whatwg/html#5484 (comment) and https://bugs.webkit.org/show_bug.cgi?id=13574#c12. We believe that the concerns from that old approach can be ameliorated by:

How would you solve it?

Solution

To lay the groundwork for an atomic move primitive in the DOM Standard, we plan on resolving #808 by introducing a model desired by @annevk, @domfarolino, @noamr, and @mfreed7, that resembles Gecko & Chromium's model of handling all script-executing insertion/removal side-effects after all DOM mutations are done, for any given insertion.

With this in place, we believe it will be much easier to separate out the cases where we can simply skip the invocation of insertion/removal side-effects for nodes that are atomically moved in the DOM. This will make us, and implementers, confident that there won't be any way to observe an inconsistent DOM state while atomically moving an element, or experience other nasty unknown side-effects.

The API shape for this new primitive is an open question. Below are a few ideas:

  • A new DOM API like replaceChildAtomic()/replaceChildrenAtomic() that can take a connected node and atomically re-parent it without removal/insertion side-effects.
    • One limitation here is that we'd have to pick and choose which existing DOM APIs we want to mirror with atomic counterparts. For example, if we ever wanted append() or appendChild() to be able to also atomically move already-connected nodes, we'd have to introduce appendAtomic() and appendChildAtomic(), and so on.
  • A setting for existing DOM APIs, e.g., append(node, {atomic: true}), replaceChild(node, {atomic: true})
  • A scoped, declarative attribute that changes the behavior of DOM mutation APIs in a subtree
    • This could be an element attribute that makes all existing DOM mutation APIs behave "atomically" when operating on already-connected nodes under the element's subtree
    • This could also be a property on the document overall, set via a header/meta tag, or some other mechanism
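
Purely as an illustration of these shapes (hypothetical names and options; nothing here is specified anywhere):

// Idea 1: dedicated atomic counterparts of existing APIs (hypothetical names)
parent.replaceChildAtomic(newChild, oldChild);

// Idea 2: an option on existing APIs (hypothetical option)
parent.append(node, { atomic: true });

// Idea 3: a scoped, declarative opt-in (hypothetical attribute); mutation APIs
// operating on already-connected nodes under this subtree would move atomically
// <section atomic-moves> ... </section>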

Compatibility issues here take the form of relying on insertion/removal side-effects that no longer happen during an atomic move. They vary depending on the shape of our final design.

  1. With a new DOM API/setting that developers have to affirmatively opt in to, you could atomically move fragments/subtrees constructed by other library code that's unaware it's being atomically moved. Those fragments may be built in a way that relies on non-atomic move side-effects (though we haven't heard of such concerns directly yet).
  2. Consider an element attribute that changes the behavior of all DOM mutation APIs to behave atomically on already-connected nodes in its subtree. You could minimize compat concerns by allowing externally-constructed portions of the subtree to opt out of atomic moves with the same attribute. But what would that mean exactly, to have part of a subtree move atomically and part of it not?

A non-exhaustive list of additional complexities that would be nice to track/discuss before a formal design:

  • How to handle mutation events? There was discussion at TPAC 2023 about suppressing mutation events when new-ish DOM features are used, so we could probably get away with simply suppressing mutation events whenever an atomic move is being performed?
  • Handling things like focus/selection properly (need to land on desired behavior)
  • Fixing up things like live ranges; the way DOM handles this today might already be suitable for atomic moves, but unclear

Anything else?

No response

@domfarolino added the "needs implementer interest" and "addition/proposal" labels on Feb 14, 2024
@domfarolino self-assigned this on Feb 14, 2024
@domfarolino added the "agenda+" (to be discussed at a triage meeting) label on Feb 14, 2024
@WebReflection commented Feb 14, 2024

First of all, thank you! I've been vocal about this issue for roughly forever and was part of one of the biggest discussions you've linked.

As the author of various "reactive" libraries and somewhat of a veteran of the "DOM diffing field", I'd like to add an idea:

The API shape for this new primitive is an open question. Below are a few ideas:

I understand a node can be moved from <main> to an <aside> element and this proposal should still work, but I think we should not discard the Range API:

  • most modern libraries have a concept of fragments, inevitably represented as virtual because there's no persistent fragment whatsoever yet on the DOM (I've been vocal about this too)
  • in a classic table sort mechanism there could be only a few TRs moved within a specific place, and that's the same for LIs and others ... if any proposed API considers only parentNode to work, that would not satisfy most fragment-based requirements, where areas are confined within virtual DOM or comment nodes to mark those special cases, while the Range API could instead simply select a start node and an end node and update the inner nodes atomically

On top of this, I hope whatever solution emerges works well with DOM diffing: new nodes can still go through the usual DOM dance when their parent changes or they become live; removed nodes that won't land anywhere else would eventually invoke disconnectedCallback if they are Custom Elements; but nodes already present in that container that are merely moved around do nothing in terms of state, they are just shuffled in the layout.

As a quick idea to signal that a node is going to be moved in an atomic way, and assuming it's also targeting a live parent, I think something like parent.insertBeforeAtomic(node[, reference]) could be an interesting approach to consider. It basically solves everything, from append to prepend to any other case where insertBefore works wonderfully well, and it hints that such a node should:

  • do nothing if the parent is the same as before (or the node was already live) ... just move it and skip all the things
  • trigger connectedCallback if the node was not live
  • ... that's it?

As insertBefore covers append, appendChild, prepend, before and after with ease, it might be the easiest starting point to have something working and useful for the variety of virtual-fragment-based solutions and diffing APIs out there.
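
For reference, this is the equivalence being alluded to; a single insertBefore-shaped primitive can express the other insertion APIs:

// How insertBefore subsumes the other insertion operations:
parent.insertBefore(node, null);                    // append / appendChild
parent.insertBefore(node, parent.firstChild);       // prepend
ref.parentNode.insertBefore(node, ref);             // ref.before(node)
ref.parentNode.insertBefore(node, ref.nextSibling); // ref.after(node)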

I hope this answer of mine makes sense and maybe triggers some even better ideas / APIs.

edit, on afterthought: another companion of this API should be reflected in MutationObserver, or better, MutationRecord ... so far we have addedNodes and removedNodes but nothing like movedNodes, which would still be desired for the most convoluted edge cases.

The movedNodes record would contain, besides the target of course, a from parent container and a to parent container. These might be the same if the node moved within one parent; otherwise they would signal to the previous parent and the new parent that something changed within their content.
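
A rough sketch of what observing such a record could look like (movedNodes, from, and to are hypothetical here and not part of MutationRecord today):

const observer = new MutationObserver((records) => {
  for (const record of records) {
    // Hypothetical "movedNodes" on a childList record, as suggested above.
    for (const node of record.movedNodes ?? []) {
      console.log(node, 'moved from', record.from, 'to', record.to);
    }
  }
});
observer.observe(document.body, { childList: true, subtree: true });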

@1cg commented Feb 18, 2024

This would be a fantastic addition of functionality for web development in general and for web libraries in particular. Currently if developers want to preserve the state of a node when updating the DOM they need to be extremely careful not to remove that node from the DOM.

Morphing (https://github.com/patrick-steele-idem/morphdom) is an idea that has developed around addressing this. I have created an extension to the original morphdom algorithm called idiomorph (https://github.com/bigskysoftware/idiomorph/) and the demo for idiomorph shows how it preserves a video in a situation when morphdom cannot. 37Signals has recently integrated idiomorph into Turbo 8 & Rails (https://radanskoric.com/articles/turbo-morphing-deep-dive-idiomorph)

If you look at the details of the idiomorph demo you will see it's set up in a particular way: namely, the video cannot change the depth in the DOM at which it is placed, nor can any of the types of the parent nodes of the video change. This is a severe restriction on what sorts of UI changes idiomorph can handle. With the ability to reparent elements idiomorph could offer much better user experience, handling much more significant changes to the DOM without losing state such as video playback, input focus, etc.

Note that it's not only morphing algorithms like idiomorph that would benefit from this change: nearly any library that mutates the DOM would benefit from this ability. Even virtual DOM based libraries, when the rubber meets the road, need to update the actual DOM and move actual elements around. This change would benefit them tremendously.

Thank you for considering it!

@smaug---- (Collaborator)

Anything else?

Add some complexity to selection/range: how to deal with Shadow DOM when the host moves around and selection is partially in shadow DOM?

@ydogandjiev

This is a very exciting proposal! In the Microsoft Teams Platform, we extensively use iframes to host embedded apps in the Teams Web/Desktop Clients. When a user navigates away from an experience powered by one of these embedded apps and comes back to it later, we provide the ability for them to keep their iframe cached in the DOM (in a hidden state) and then re-show it later when it's needed again. To implement this functionality, we had to resort to creating the embedded app frames under the body of our page and absolutely positioning them in the right place within our UX. This approach has lots of obvious disadvantages (e.g. it breaks the accessibility tree, requires us to run a bounds synchronization loop, etc.) and the only reason we had to resort to it was that moving the iframe in the DOM would reload the embedded app from scratch, thus negating any benefits of caching the frame. This proposal would allow us to implement a much more ideal iframe caching solution!

Note the location of the iframe in the DOM and its absolute positioning in this recording:
https://github.com/whatwg/dom/assets/3357245/7fd4d2a7-2c2d-4bed-9a78-9c60f26a42f4

@infogulch

The WHATNOT meetings that occurred after this issue was created deferred discussion about the topic. I wonder what next steps would be needed to move this issue forward. The next meeting is on March 28 (#10215).

@noamr (Collaborator) commented Mar 22, 2024

The WHATNOT meetings that occurred after this issue was created deferred discussion about the topic. I wonder what next steps would be needed to move this issue forward. The next meeting is on March 28 (#10215).

I hope we can get to it in the 28.3 WHATNOT. @domfarolino @past ?

@past commented Mar 22, 2024

It's already on the agenda, so if the interested parties are attending we will discuss this.

@iteriani

Are the imperative and declarative APIs meant to slowly replace the existing APIs over time? Or do we need to choose between one or the other because of potential overhead?

@noamr (Collaborator) commented Mar 26, 2024

Are the imperative and declarative APIs meant to slowly replace the existing APIs over time? Or do we need to choose between one or the other because of potential overhead?

If I understand the question, it's mainly for backwards compatibility. In some cases you might want the existing behavior or something subtle in your app relies on it, so we can't just change it under the hood.

@sebmarkbage

This would be very nice for React since we currently basically just live with things sometimes incorrectly resetting. A couple of notes on the API options:

  • Associating with the node that gets moved e.g. an option on the <iframe> doesn't make much sense because it can be deeply nested inside the tree that moves. The iframe doesn't know anything about which context it moves inside. At best maybe you'd just have to by default add it to all possible nodes that might contain any state - which is all nodes.
  • Associating with a subtree creates a kind of "mode". Basically for a React app we'd just add it to the entire document, but that also affects any subtrees embedded inside the document which might be an entire legacy app or a different framework. It forces us to basically break the whole app to opt into it. It'd basically be like a new doctype kind of mode.

The thing that does cause a change is the place where the move happens. But even then it's kind of random which one gets moved and which one implicitly moves by everything around it moving. We don't remove all children and then reinsert them. So sometimes things preserve state.

A new API for insertion/move seems like a better option.

We'd basically like to just always use the same API for all moves - which can be thousands at a time. This means that this API would have to be really fast - similar to insertBefore. An API like append(node, {atomic: true}) doesn't seem good because the allocation and creation of potentially new objects and reading back the value from C++ to JS isn't exactly fast. Since this is a high performance API, this seems like a bad option.

Something new like replaceChildAtomic would be easy to adopt inside a library and faster.

@rniwa (Collaborator) commented Mar 26, 2024

One thing that's nice to nail down is whether re-ordering of child nodes is enough or we need to support re-parenting (i.e. parent node changing from one node to another). Supporting the latter is a lot more challenging than just supporting re-ordering.

@1cg commented Mar 26, 2024

Definitely would prefer full re-parenting. I gave an htmx demo of an morph-based swap at Github where you could flip back and forth between two pages and a video keeps working:

https://www.youtube.com/watch?v=Gj6Bez2182k&t=2100s

The dark secret of that demo was that I had to really carefully structure the HTML in the first and second pages to make sure that the video stayed at the same depth w/ the same parent element types to make the video playing keep working. Would be far better for HTML authors if they could change the HTML structure entirely, just build page 1 the way they want and build page 2 the way they want, and we could swap elements into their new spots by ID.

@domfarolino (Member, Author)

(For the purpose of brevity, I will begin using the SPAM acronym that we've been toying around with internally, which means "state-preserving atomic move". The most obvious example is that an iframe that gets SPAM-moved doesn't lose its document or otherwise get torn down.)


  • Associating with a subtree [...] Basically for a React app we'd just add it to the entire document, but that also affects any subtrees embedded inside the document [...]. It forces us to basically break the whole app to opt into it.

The thing that does cause a change is the place where the move happens.
[...]
A new API for insertion/move seems like a better option.

@sebmarkbage I understand your hesitation around a new subtree-associated HTML attribute, in that it would be over-broad, affecting tons of nested content that a framework might not own and possibly breaking parts of an app that don't expect SPAM moves to happen. But I'm curious whether a new DOM API really gets you out from under that over-broadness while still being useful. What would you expect orderedList.replaceChildAtomic(newListItem, oldListItem) to do, where newListItem is an <li> with a bunch of app-specific (not framework-owned) child content, including <iframe>s?

I guess I had in mind that the imperative API would force-SPAM-move the "state-preservable" elements in the subtree that's moving, so that any nested iframes do not get their documents reset¹. But if that API would not preserve nested iframe state, then the only way it would be possible to actually preserve that iframe's state in this case is if the application took care to apply an iframe-specific HTML attribute to it, specifying that it opts into SPAM moves:

  • Associating with the node that gets moved e.g. an option on the <iframe> doesn't make much sense because it can be deeply nested inside the tree that moves. [...]

But it sounded like that option didn't sit well with you because the application author would be one-by-one sprinkling these attributes to random iframes without understanding the context in which the SPAM move might actually take place, by a framework way higher up the stack.

So how can we best enable the scenario where an <li> that contains a deeply-nested iframe gets SPAM-moved without the iframe being reset? My thought is that:

  • list.replaceChildAtomic(new, old) would force-SPAM-move iframes in the new subtree (if new is already connected in the DOM of course)
  • Good ole fashioned list.replaceChild(new, old) would only cause SPAM moves to happen on elements in the subtree with the HTML attribute directly applied to it (i.e., <iframe preserve=content>), and no other elements.

But I would love to get more thoughts on the subtree side-effects stuff in general.

Footnotes

  1. Possibly other state like focus/selection being preserved on other eligible elements; that bit would need to be figured out!

@rniwa (Collaborator) commented Mar 27, 2024

I don't think we can make this happen automatically based on a content attribute on an iframe. It most certainly needs to be a completely new DOM API.

@domfarolino (Member, Author)

I don't think we can make this happen automatically based on a content attribute on an iframe. It most certainly needs to be a completely new DOM API.

I am very much open to that, I'm just trying to consider what subtree side-effects are acceptable. That is, should parent.appendAtomic(connectedDivWithChildIframe) preserve the "child iframe" state or not? I think it has to, for the API to be useful at all. But I'm also sympathetic to compat concerns that it might cause a preserving-move to happen on deeply-nested iframes in a subtree built by another application/framework than the one performing the move in the first place. (And maybe that could break things if parts of the app rely on preserving moves not happening on nodes in the subtree.)

@domfarolino (Member, Author)

An attribute + DOM API could work together in this case a bit, to ameliorate some of the compat concerns. For example:

const nodeToAtomicallyMove = document.querySelector('......');
// Never trigger atomic moves on *this* specific sub-subtree, that was built by "old" content.
nodeToAtomicallyMove.querySelector('.built-by-legacy-app').preserve = 'none';
newParent.appendAtomic(nodeToAtomicallyMove);

In this case, all <iframe>s inside nodeToAtomicallyMove could be SPAM moved except ones that exist inside the subtree .built-by-legacy-app. Those ones are specifically opted-out, because maybe they can't handle preserving-moves... Just an idea!

@rniwa (Collaborator) commented Mar 27, 2024

That sounds like something that could be built by a userland library, not something that needs to be built into the browser's native API. We really need to keep this API proposal as simple & succinct as possible.

@noamr (Collaborator) commented Mar 27, 2024

I don't think we can make this happen automatically based on a content attribute on an iframe. It most certainly needs to be a completely new DOM API.

Can you expand on why this is impossible? I can see the point why it might be preferable, but I think both directions are possible.

@noamr (Collaborator) commented Mar 27, 2024

and +1 to not limiting it to reordering. We'll end up just scratching the surface of the use-cases, coming back to where we started where we still need a full solution for reparenting.

@annevk (Member) commented Mar 27, 2024

I'm also a bit at a loss as to why we'd discuss new attributes. That seems like a pretty severe layering violation? The way I see it:

  1. https://dom.spec.whatwg.org/#mutation-algorithms needs to gain a new "move" operation that encapsulates argument validation, new mutation observer records, new callback steps for specifications to hook into, etc.
  2. We figure out what API is best suitable for that new primitive, e.g., parent.moveBefore(node, before). (Possibly multiple APIs, but best to start small and give it time to bake in multiple implementations.)
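
For concreteness, a sketch of how that primitive would be used, mirroring insertBefore (illustrative only):

// Same shape as insertBefore, but performs a move (no remove/insert
// side effects) when the preconditions for a move are met.
newParent.moveBefore(node, referenceChild); // move node before referenceChild
newParent.moveBefore(node, null);           // move node to the end of newParent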

@noamr (Collaborator) commented Mar 27, 2024

I'm also a bit at a loss as to why we'd discuss new attributes. That seems like a pretty severe layering violation? The way I see it:

  1. https://dom.spec.whatwg.org/#mutation-algorithms needs to gain a new "move" operation that encapsulates argument validation, new mutation observer records, new callback steps for specifications to hook into, etc.
  2. We figure out what API is best suitable for that new primitive, e.g., parent.moveBefore(node, before). (Possibly multiple APIs, but best to start small and give it time to bake in multiple implementations.)

I tend to agree with the conclusion, but I want to explain the main reason to consider things like an iframe attribute, in case it raises something else.

Outside "keep iframes from reloading", it's unclear exactly what the effects of this would be. For focus, we need to blur and refocus anyway, e.g. in case you're moving the element to an inert tree. We can decide to do that and just suppress the events. Similar provisions have to be taken for selection. So if we add moveBefore, we have to decide if it does all these things, if so, how exactly, or just the iframes thing for start.

@gnoff commented Mar 27, 2024

@domfarolino

I guess I had in mind that the imperative API would force-SPAM-move the "state-preservable" elements in the subtree that's moving

I think what Seb is saying is that React can decide if a move should be state-preserving, but if React added a "preserve-state" attribute to <html /> and then some embedded application deep in the DOM does an append expecting it to be non-state-preserving, we've just altered the moves that the other application owns.

Our perspective is that the mover decides the move semantics rather than the tree. So any moves done by this embedded application won't preserve state b/c that is what the application was expecting, and any moves done by React would preserve state because React was updated to signal this intent by using a novel API.

@wlib commented Nov 23, 2024

Just to confirm my understanding:

The whole point here is to have the option to move a node without having to go through undesired disconnect/connect steps (which can discard state). Iff a node is not connected to the document that it's trying to move to, it's not possible to avoid the connect step, so an atomic move operation can't happen. To reflect this, moveBefore() throws. It will never throw for cases where state is possible to preserve. In a correctness sense, this is in the same spirit as the rest of the manipulation apis.

The problem is for library code where you would have to check every node that you're moving around to see if it was removed by someone else or something. In these cases, it's pointless to care whether it can be moved with or without (dis)connections, as you're really just declaring a desired DOM fragment. Any nodes that aren't already in the tree don't have any state to preserve anyways.

In the same sense that the newer manipulation functions have replaced insertBefore(), there needs to eventually be "soft" versions that just attempt to use atomic movement if they can. The painful part is that before()/replace()/prepend()/replaceChildren()/append()/after() already have the shortest names lol.

@wlib commented Nov 23, 2024

The more I think about that, the more that I realize how fragile it would be wrt manipulation order. It's easy to first replace some nodes that you would later move somewhere else. A properly atomic move api absolutely needs to have transactions to avoid this. The easiest solution is to just make transactions commit in the next microtask. I'm pretty sure this is the same behavior that the current manipulation apis have, but I just want to confirm that this is how moveBefore() &c. will behave.

I think it's worth it to talk about this sooner rather than later, but I'm pretty sure that this should be in a separate issue.

@titoBouzout

Anything using moveBefore will fall back to insertBefore no matter what. There's literally 0 use-cases for throwing.

@dead-claudia comment from here #1307 (comment)
An acceptable compromise would be adding a flag to make it throw

  • parent.moveBefore(child, refNode) fallback to insertBefore if needed
  • parent.moveBefore(child, refNode, true) throws if needed

That, or every framework will have to ship the same 5 lines of code (if that doesn't make this ridiculously slow, in which case they will restore focus/selection in other ways and skip using this API).
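
Roughly the boilerplate being referred to; a sketch assuming moveBefore throws when a plain move is not possible:

function moveOrInsertBefore(parent, node, ref) {
  try {
    parent.moveBefore(node, ref);   // state-preserving move when possible
  } catch {
    parent.insertBefore(node, ref); // fall back to plain remove + insert
  }
}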

@WebReflection commented Nov 23, 2024

Anything using moveBefore will fall back to insertBefore no matter what. There's literally 0 use-cases for throwing.

Not adding much, but besides fully agreeing with this sentence: the hidden footgun this API is throwing at developers is that even libraries "sure enough" to move around their own nodes can't prevent other libraries from interfering with live nodes ... so ensuring nodes are live, or haven't been moved elsewhere, in a DOM that has no mechanism to provide or enforce node ownership, really looks like somebody overlooked the reason this API is desired in the first place: the intent is in the name, and nothing else should happen ... if the intent can be honored, let it be; if it needs internal disambiguation for when that cannot be performed, let that be an internal implementation detail no Web developer asked for or cared about when moveBefore was meant to be used.

Again, this API should be the new insertBefore that boosts performance for everyone using it; instead it's becoming a footgun try/catch trap nobody concretely needs, or asked for, on the Web.

@dead-claudia

From #1255 (comment):

To be blunt, with this API as-is I think it's likely the following will happen:
[...]
3. these checks will be needed in hot paths potentially affecting performance tests on which frameworks compete
4. if there is measurable performance degradation, frameworks will not use this API

Assuming (4) occurs, which I think is probably likely: if these checks can be done faster in platform code, they should be done there.

This isn't entirely hypothetical, but the impact would be mostly limited to large structural tree updates, where most of the time is really spent recomputing layout and updating paint.

In creating MithrilJS/mithril.js#2982, flattening a nested try/catch in the attribute update flow saved about 10% in performance, and the mere addition of try/catch to that section caused a roughly 20% perf drop, but only in the fast case of no attributes changed (where diffs commonly take a few milliseconds). In the slow case where attributes were frequently changing, it was barely outside the margin of error, but paint times would far exceed that anyways.

This is of course in the attribute update flow, and virtual DOM frameworks have to be able to process thousands, possibly tens of thousands, of these in just a single frame. In some cases, skipping even one frame with those updates would result in noticeable perf degradation. (Some users use Mithril.js to power games, and so they'd need that kind of speed.) Conversely, a keyed list might have to move hundreds if you change sort order, and users would tolerate some noticeable lag. So, as long as it clocks in at no more than about 10us per operation for the whole try/catch, my only complaint would be just the need for that try/catch.

@WebReflection commented Nov 25, 2024

It's been mentioned that moveBefore(newNode, refNode, booleanTrap) is undesired, and I wonder if this API could at least be changed to moveBefore(newNode, refNode?, { fallback }), where the third argument, as an object, could have a fallback field (or any better name) to indicate the node should be inserted instead if no move operation can be performed.

If a boolean fallback is not cool (but it's fast, simple, and memory friendly), it could at least be a method that receives the forwarded arguments:

parent.moveBefore(
  knownNode,
  refNode, // optional
  {
    fallback(knownNode, refNode) {
      return this.insertBefore(knownNode, refNode);
    },
    // OR ...
    fallback: (parent, knownNode, refNode) => {
      return parent.insertBefore(knownNode, refNode);
    }
  }
)

If such a third option is not passed along, let it throw; if it's there, use that fallback without throwing.

If moveBefore, as it's landing, won't throw on extra arguments, then that perfect name for this API is lost in meaning. So, if even this extra object argument is not desired, let's think about a better name than moveBefore, because right now this should be named moveBeforeMaybeAndThrowIfNotPossible, which would better represent the API's functionality.

/cc @annevk @noamr thoughts?

@noamr (Collaborator) commented Nov 25, 2024

@WebReflection those property bags have a GC impact so they can be more expensive than try/catch blocks.
More likely this would look like two versions: moveBefore and moveOrInsertBefore. It might be OK to deliver both versions in the first go, but we'll have to discuss it with the other browser vendors. The technical complexity of this is negligible, it's just a matter of API design and feature shipping management.

I think the moveBefore version that throws has an important use case that's hard to see when we think of this as a drop-in replacement for existing DOM operations. When we introduce move operations, the developer in some cases should be able to rely on this being a successful move, not just on "best effort to do something better than before".

For example, your video editing app moves youtube iframes around, and wants to absolutely make sure they're not reloaded during the move. If we use the fallback, this will fail silently and the unexpected outcome would be delivered to the user. If we throw, a change in the user code that breaks the move (e.g. moving via a DocumentFragment instead of directly) would throw, alerting the author immediately that their expectation of smooth moves would not be met.

Also, when I think about frameworks, they should probably use the throwing version internally (without try/catch) when preserving same-document connected nodes, and be careful not to transfer them through a disconnected DocumentFragment or element. Passing this responsibility to the framework gives the framework the responsibility to know when it's actually moving vs inserting. For small websites that don't use frameworks/DOM libraries, I think try/catch is probably fine performance-wise, but as I said having a moveOrInsertBefore variant (early or soon after) would also work.

@WebReflection commented Nov 25, 2024

those property bags have a GC impact so they can be more expensive than try/catch blocks

Those can all be the same object passed around, though; mine was an inline example. I would write that object once and pass it every single time, so not much GC pressure / impact?

having a moveOrInsertBefore variant (early or soon after) would also work

My thinking is that having moveBefore symmetric with insertBefore is the most desired feature, so it's the current behavior that will backfire ... but about your use case: what does the developer do once that error occurs? What are the next steps when such a move can't be performed? Silently fail on the user's side, or ... what else?

edit 'cause once again, if we need two APIs to do the same thing I'd rather have a node.canBeMoved accessor or method to make those cases obvious without needing any try/catch around.

@noamr (Collaborator) commented Nov 25, 2024

having a moveOrInsertBefore variant (early or soon after) would also work

My thinking is that having moveBefore symmetric with insertBefore is the most desired feature, so it's the current behavior that will backfire ... but about your use case: what does the developer do once that error occurs? What are the next steps when such a move can't be performed? Silently fail on the user's side, or ... what else?

Probably choose a different code path for moving nodes around.
Envision this:

// Option A
new_parent.moveBefore(iframe, ref_node);

// Option B
const fragment = document.createDocumentFragment();
fragment.moveBefore(iframe, null);
new_parent.moveBefore(fragment, ref_node);

If we use the fallback version, both of these would succeed and would appear to the developer as if they're doing the same thing, while in fact they have a very different effect to the user.
In the throwing version, the developer would know immediately that if they use (B) this is not going to work, so they have to either try/catch it explicitly or use option A and move the nodes directly.

edit 'cause once again, if we need two APIs to do the same thing I'd rather have a node.canBeMoved accessor or method to make those cases obvious without needing any try/catch around.

I have no objections to this.

@WebReflection

Option B is obviously using the wrong methods though ... a disconnected fragment (which is all fragments) that uses moveBefore makes no sense ... I'd rather have that method overridden to throw on the DocumentFragment class, but I understand that case could be true for any offline node too (if I understand this API correctly) ... although, we have:

if (parent.isConnected)
  parent.moveBefore(node, ref_node);
else
  parent.insertBefore(node, ref_node);

AFAIK that's not the end of the story though; the operation can fail on other occasions too ... the accessor I've mentioned also wouldn't work; a method such as parent.canMoveNode(node) would be better, still without any need to throw on moveBefore, as your use case is probably the edge one, not the most common one, for when moveBefore is desired imho.

@noamr (Collaborator) commented Nov 25, 2024

Option B is obviously using the wrong methods though ... a disconnected fragment (which is all fragments) that uses moveBefore makes no sense ... I'd rather have that method overridden to throw on the DocumentFragment class, but I understand that case could be true for any offline node too (if I understand this API correctly) ... although, we have:

"Makes no sense" is exactly it. This is exactly when we throw!

if (parent.isConnected)
  parent.moveBefore(node, ref_node);
else
  parent.insertBefore(node, ref_node);

AFAIK that's not the end of the story though; the operation can fail on other occasions too ... the accessor I've mentioned also wouldn't work; a method such as parent.canMoveNode(node) would be better, still without any need to throw on moveBefore, as your use case is probably the edge one, not the most common one, for when moveBefore is desired imho.

It would only fail when moving between connected/disconnected parents, across documents, or when trying nonsensical things like moving comment nodes. What are those "other occasions"?

@WebReflection commented Nov 25, 2024

I was referring to these checks #1307 (comment), but now there is a new one needed for comments ... comment nodes are used in both lit-html and my libraries, among others, to pinpoint fragments, and when these fragments are moved their comment nodes move along without needing to leave the living DOM; they are just an indirection ... so it's new to me that comments can't be moved (and why? they are the least problematic thing ever when it comes to repaint/reflow), and that mentioned check would have to become:

const moveOrInsertNode = (container, node, ref_node = null) => {
  const canMove = (
    container.isConnected &&
    node.isConnected &&
    node.nodeType !== node.COMMENT_NODE &&
    (!ref_node || ref_node.parentNode === container)
  );
  return canMove ?
    container.moveBefore(node, ref_node) :
    container.insertBefore(node, ref_node)
  ;
}

which is starting to become very ugly and a performance hazard due to all those checks needed for every single node that would like to be moved ... I don't think a try/catch would perform worse than that ... and still it'd be slower than it needs to be.

If the method needs to do all those checks internally, even a companion method to know if a node can be moved would duplicate checks and intent ... the fastest way to have it all at once then seems to be the third argument, with a dev-defined callback.

@noamr (Collaborator) commented Nov 25, 2024

I was referring to these checks #1307 (comment), but now there is a new one needed for comments ... comment nodes are used in both lit-html and my libraries, among others, to pinpoint fragments, and when these fragments are moved their comment nodes move along without needing to leave the living DOM; they are just an indirection ... so it's new to me that comments can't be moved (and why? they are the least problematic thing ever when it comes to repaint/reflow), and that mentioned check would have to become:

const moveOrInsertNode = (container, node, ref_node = null) => {
  const canMove = (
    container.isConnected &&
    node.isConnected &&
    node.nodeType !== node.COMMENT_NODE &&
    (!ref_node || ref_node.parentNode === container)
  );
  return canMove ?
    container.moveBefore(node, ref_node) :
    container.insertBefore(node, ref_node)
  ;
}

Sorry, the restriction about comments is for the parent. You can move comment nodes.
(!ref_node || ref_node.parentNode === container) is not related to move, it would also throw for insertBefore.
So you'll be left with canMove = container.isConnected === node.isConnected
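
Applying that simplification, the earlier helper would reduce to something like this (a sketch; it ignores the cross-document case mentioned elsewhere in the thread):

const moveOrInsertNode = (container, node, ref_node = null) =>
  container.isConnected === node.isConnected
    ? container.moveBefore(node, ref_node)    // same connectedness: a move is possible
    : container.insertBefore(node, ref_node); // otherwise fall back to insertion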

@WebReflection

So you'll be left with canMove = container.isConnected === node.isConnected

so ... two false would not throw? Interesting

@noamr (Collaborator) commented Nov 25, 2024

So you'll be left with canMove = container.isConnected === node.isConnected

so ... two false would not throw? Interesting

Yea, you can move between disconnected parents.
It would only throw when you can't move without side-effects - as in, across documents or when the move would connect/disconnect the element.

@WebReflection

To whom it might concern: just adding those two isConnected checks repeatedly gave me ~30% slower results, though it's true that for large diffing I can check isConnected for the parent only once; in that case I see a ~20% slowdown due to the checks per node. I am not considering native moveBefore, which might produce faster end results, just testing how much those checks can impact performance for more complex scenarios (js-framework-benchmark).

@dead-claudia commented Nov 25, 2024

To whom it might concern: just adding those two isConnected checks repeatedly gave me ~30% slower results, though it's true that for large diffing I can check isConnected for the parent only once; in that case I see a ~20% slowdown due to the checks per node.

@WebReflection What time is the "~30% slower" relative to? And same for the "~20% slowdown". Just looking for some perspective here.

@WebReflection

Multiple moves, and we're talking 0.3 vs 0.4, but for benchmarks 0.1 might mean everything ... of course more extensive tests that actually perform moveBefore when possible, instead of just insertBefore, must be done; right now the moveBefore operation is monkey-patched as insertBefore (see the changed file), then benchmarked via:

Node.prototype.moveBefore = Node.prototype.insertBefore;

right before the test suite benchmark.

@dead-claudia

@WebReflection Sorry, I meant like specific time numbers, like 60k ops/sec or 1 ms/op.

@noamr (Collaborator) commented Nov 25, 2024

@WebReflection given that it's established that raw moveBefore has valid use cases, and that we can perhaps add other variants or have something like canMoveBefore in addition, do you mind opening another issue for these, with those benchmarks and whatnot? This issue is becoming overloaded with too many comments about the same point.

@WebReflection

@noamr the whole discussion is about not landing moveBefore as it is, so me opening a new discussion won't benefit anyone. I understand your use case (still an edge case, and it's not clear what the developer would end up doing after a throw happens), but I don't like this method landing in a developer-hostile way.

I am also OK with stopping the discussion, but so far we have most library authors not happy about that throwing (under a name that is so similar to insertBefore, which does not throw) and, like I've said, a method that duplicates checks on nodes would end up as slow as me checking isConnected in an optimized way (i.e. checking that the parent is connected once for multiple moves, instead of each time).

If there is no way this method can be renamed to moveBeforeOrThrow as a low-level API, so that we can discuss in the future a moveBefore that does not throw, I might as well rest my case; by the time I have benchmarks it will be too late to change the current state of this low-level API with a way too high-level name.

Unless explicitly asked to, I will stop commenting on this, or the other, issue.

@slorber commented Nov 25, 2024

FYI, React's experimental reconciler integration PR: facebook/react#31596, with a preference for not throwing, apparently:

The other reason it's still off is because there's currently a semantic breaking change. moveBefore() errors if both nodes are disconnected. That happens if we're inside a completely disconnected React root. That's not usually how you should use React because it means effects can't read layout etc. However, it is currently supported. To handle this we'd have to try/catch the moveBefore to handle this case but we hope this semantic will change before it ships. Before we turn this on in experimental we either have to wait for the implementation to not error in the disconnected-disconnected case in Chrome or we'd have to add try/catch.

To me moveBefore not throwing is preferable. If it lands and throws, this is an irreversible decision reducing the DX for the most common usages. A fail-fast behavior can be opt-in, through an extra option: parent.moveBefore(child, refNode, {fallback: false})


@dead-claudia commented Nov 25, 2024

The benchmark discussion was simply us assessing whether it's fast enough for it to be viable for us to use - if it's too slow, the throwing variant is outright useless to us. And switching to a method that moves instead of removing and inserting would allow us to fix a longstanding animations bug: MithrilJS/mithril.js#2612.

It's incredibly important to me that it meets my performance requirements. And those requirements are stiff: around 5 µs/op firm and 50 µs/op hard. (Soft affords ~1k keyed element moves without dropping frames at 60 FPS, and hard affords ~100 moves.) Slower than that, I simply can't switch to it. And no, this is not hypothetical - check the comment at the top of this file and imagine if someone reverses such a 100-row table's order. (Namesilo's UI currently allows changing sort order, by the way, and they can display up to 250 rows per page.)
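
(For scale, using the numbers above: ~1,000 moves × 5 µs ≈ 5 ms and ~100 moves × 50 µs ≈ 5 ms, both of which still leave room for layout and paint inside a ~16.7 ms frame at 60 FPS.)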

By the way, keyed moves have the strictest deadlines for this. Other callers can tolerate as much as 100x slower for insertBefore.

A try/catch can add as much as 0.5 µs/op, and I believe the insertBefore currently clocks in at around 1-3 µs/call on my laptop. This overhead is not negligible. If falling back is needed, elements that don't support moves would likely clock in around 2-5 µs/call if performance turns out to be similar between the two. This means we'd be able to tolerate a dynamic fallback, but only barely.

This is why I'm so concerned and pushy about it. We need the ability to just move, and we need simple node movement to be very fast.

@sorvell commented Nov 26, 2024

I want to push back lightly on the use cases (e.g. #1255 (comment)) for throwing to try to understand this more. When exactly is it valuable?

We have a number of obvious cases where state can be affected, for example (1) animation, (2) focus, (3) iframe status. We have 2 important nodes, the container to move to and the target moving element. I think that means these permutations:

case | container    | target       | throws | state | preserved
1    | connected    | connected    | no     | yes   | yes
2    | disconnected | disconnected | no     | no    | no
3    | connected    | disconnected | yes    | no    | no
4    | disconnected | connected    | yes    | yes   | no

I can't see any value in throwing for case (3) since there's no relevant state to consider.

That leaves case (4) and the question: is there ever a time where you wouldn't want to remove/disconnect an element to prevent its state from being lost? I'm struggling to find a practical case where that would be desirable.

I also think that moving across documents is a corner case and if we throw there, I suspect people won't care because that likely won't need hot path checking code.

@annevk (Member) commented Nov 26, 2024

The way I am looking at this is whether the method performs a move or not. For the past two-and-a-half decades the DOM has only had insert and remove. And we've only been able to define a move operation for two same-document connected nodes or two same-shadow-including-root disconnected nodes (I suspect that's sufficient for @sebmarkbage's case cited above). Note that for the connected case it also comes with a new custom element callback (this is not offered in the disconnected case because custom elements never fire callbacks there; only built-in elements appear to have a need for that thus far). Making it implicitly fall back to different semantics prevents us from building upon it in the future, as people would rely on it having insert or remove behavior in a certain set of scenarios. And it would also be rather magical to change the semantics under the hood. You could perhaps have a method moveBeforeOrMaybeInsert(), but that seems way better to leave to libraries, as again it would lock us into certain behavior we know we don't want.

It will be interesting to see the performance aspects explored more. Once there is more data there we should certainly investigate why a couple of conditionals are so expensive. And perhaps adoption of abstractions is so overwhelming that it's indeed compelling to introduce moveBeforeOrMaybeInsert() at some point, but not before we are much more confident about the problem space.

Also, let me say that this issue has become quite unreadable. Adding way too many duplicate comments (this includes repeating what other people already said, to be clear) in just a couple of days is not a good way to make your case. There are 160 people watching this repository. We all owe it to them to be more considerate of their time and attention.

@WebReflection commented Nov 26, 2024

Last question from me: is it possible to rename and land this method as moveBeforeAtomic(child, ref?), keeping it exactly as it is (throwing), to leave room in the future for a moveBefore that is more developer-friendly for the 98% of presented use cases, where an optional third argument such as { atomic: true } would still throw whenever that is desired? The disambiguation would be in the name, where atomic would explain the throwing nature when such an operation is not possible.

edit to not bother more ... but ...

For the past two-and-half decades the DOM has only had insert and remove

This is the exact reason most of us are against landing a completely new thing under a name that undermines its most meaningful behavior, from veterans to newcomers. If this is all new, and somehow experimental, because:

It will be interesting to see the performance aspects explored more

... but in the meantime the most desirable name for such an API will have been doomed forever, then I strongly believe there is all the time we want, and need, to explore such new behavior/API under a name that does not promise, in name, meaning, and intent, what most Web developers would expect from such an API: let it be moveBeforeAtomic to signal that!

As a quick reminder, the most successful and popular websites out there have infinite lists and/or tabular data, all cases where nodes inevitably and quickly go out of, and back into, the living DOM ... a sorted or filtered table is the same; it's not always a reordering of a to-do list, and as mentioned before:

For small websites that don't use frameworks/DOM libraries, I think try/catch is probably fine performance-wise

so it feels like this API does not have "scaling" in mind when it comes to performance.

Moreover, the state-preserving feature of this API has been singled out as the only desired outcome, when in practice X and similar projects, FB and similar projects, or even GitHub, which renders code views in split chunks of nodes, won't benefit at all from the ergonomics, given the duplicated checks on both the client side and inevitably in the native browser engine too, just to do the most common thing everyone does these days: stream visual data to consume.

but that seems way better to leave to libraries

Agreed, but only if libraries have a way to hook carelessly into the fastest possible path, because otherwise this API is asking its consumers to add bloat to every page the world is surfing these days, because it literally couldn't compromise on anything. Even though every developer that paid attention (I even suggested insertBeforeAtomic, because insertBefore is the single thing that covers it all, and now I regret that very first contribution) has been loud about not being happy, you had to play the "stop commenting on this" card, when you, in the first place, asked me to comment in the related issue, and others just followed, or did that before me.

I understand there are many people involved in this project, and while repeated arguments from different developers are annoying precisely because they're repeated, the fact that different developers repeatedly stated they didn't like the throwing should be visible to everyone else involved in this effort.

It's a new thing, and it's potentially a game-changer only if it lands right ... until it does, please do not nuke the most spot-on name with a behavior that will backfire the day after it lands out there.

Thanks to anyone patient enough to read this edit. I hope for the best, which is everything but the moveBeforeOrInsertMaybe meme I can already see coming after landing.

edit 2: as mentioned in the comment after this one, if moveBefore remains the name, node.moveBefore(reference) is also doomed, as one would throw and the other one hopefully would not? ... but then again, if it doesn't, it will already create confusion, so this spot-on naming choice (for dev intents) affects both the present and the future of the DOM ... please be careful and take full responsibility for the currently chosen name in a new field nobody has any concrete results about. Please change the name already if throwing is desired, for a name shaped after 20 years of non-throwing legacy (insertBefore).

