
should there be a metric that forces frameworks to do generalized list reconciliation? #1198


Description

@leeoniya

allow me to explain...

back in the day, when this benchmark only compared vdom-based frameworks, the "swap rows" metric was there specifically to trigger a pathological case that tripped up inefficient list reconciliation, for example when diff algorithms only did rudimentary list prefix/suffix testing and fell back to mutating everything in the middle. this is why growing the swap distance in 2017 had a major impact on the standings.
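to make the pathology concrete, here's a minimal sketch (not any framework's actual diff; `naiveDiff` and `Row` are made-up names) of what prefix/suffix-only matching does to a swap of rows 1 and 998: the common prefix ends at index 1, the common suffix ends at index 998, and nearly the whole list lands in the unmatched middle to be naively patched or torn down.

```ts
type Row = { id: number; label: string };

// trims the common prefix and suffix by key, like a rudimentary keyed diff
// would, and reports the span left over in the middle
function naiveDiff(prev: Row[], next: Row[]): { start: number; end: number } {
  let start = 0;
  while (start < prev.length && start < next.length && prev[start].id === next[start].id) {
    start++;
  }

  let endPrev = prev.length - 1;
  let endNext = next.length - 1;
  while (endPrev >= start && endNext >= start && prev[endPrev].id === next[endNext].id) {
    endPrev--;
    endNext--;
  }

  // without a keyed move-detection pass, everything in [start, endPrev]
  // would be re-rendered or mutated item by item
  return { start, end: endPrev };
}

const rows: Row[] = Array.from({ length: 1000 }, (_, i) => ({ id: i, label: `row ${i}` }));
const swapped = rows.slice();
[swapped[1], swapped[998]] = [swapped[998], swapped[1]];

console.log(naiveDiff(rows, swapped)); // { start: 1, end: 998 } -> 998 rows in the "middle"
```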

what we have today (i think) with the addition of fine-grained reactivity is a side-stepping of this intent: the frameworks' store or proxy primitives map imperative mutations directly to dom operations, bypassing the need to actually diff the whole list for changes, effectively behaving more like jQuery plus a thin layer of data<>dom binding in between. one such example:

```ts
source.move(1, 998);
source.move(998, 1);
```

https://github.com/xania/view/blob/8bb5eb49641906710b0eb9c669072a8105986b64/packages/view/lib/core/list/list-source.ts#L188

remember when mikado had a dedicated swap() method? #654

in 99% of real applications, you're going to have a database and a backend with a json API that returns a list of objects, a structure that cannot be cheaply matched by reference identity the way it can when everything just lives in JS memory. imagine an API call to the backend that returns a list where 80% of the objects have the same id, 10% are missing, 5% are highlighted, and 5% are re-ordered or new. you now need to run a generalized list diff algo to properly map and mutate the correct objects by some key/id. React and vdom frameworks handle this generically, and this is a case where you cannot directly write the pre-defined mutation logic into the app; it has to be inferred from the in-memory list and a new list of items from a `fetch('/list/query').then(r => r.json())` API call.
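for illustration, here's a minimal sketch of why matching by id is forced in that scenario (`Row` and `reconcileById` are hypothetical names, not any framework's API): the fresh array comes out of JSON parsing, so no object is reference-equal to what's already in memory, and the only way to reuse existing items/DOM is a keyed pass over both lists.

```ts
type Row = { id: number; label: string; highlighted?: boolean };

function reconcileById(prev: Row[], next: Row[]): string[] {
  const prevById = new Map<number, Row>();
  for (const row of prev) prevById.set(row.id, row);

  const ops: string[] = [];
  for (const row of next) {
    const existing = prevById.get(row.id);
    if (!existing) {
      ops.push(`insert ${row.id}`);
    } else {
      if (existing.label !== row.label || existing.highlighted !== row.highlighted) {
        ops.push(`update ${row.id}`);
      }
      prevById.delete(row.id);
    }
  }
  // whatever is left in the map has no counterpart in the new list
  for (const id of prevById.keys()) ops.push(`remove ${id}`);

  // moves would need an additional order-aware pass (e.g. longest increasing
  // subsequence over the old indices of the kept items)
  return ops;
}

// the new list arrives as plain data, not as tracked store mutations
const currentRows: Row[] = [
  { id: 1, label: 'a' },
  { id: 2, label: 'b' },
  { id: 3, label: 'c' },
];

fetch('/list/query')
  .then((r) => r.json())
  .then((next: Row[]) => console.log(reconcileById(currentRows, next)));
```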

i'm not sure how many frameworks here do this imperative swapping; it's certainly not all of them. but maybe we need a metric that measures the common case above, one where frameworks are forced to do a full reconciliation of an unpredictable/complex set of mutations: re-ordering of keys, removal of keys, updates to existing items, and insertion of new items, all arriving as a plain js array over a fetch/json API?
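roughly, the data step for such a metric could look like the sketch below (the percentages and helper names are made up for illustration): take the current rows and build a brand-new plain array that keeps most ids, drops some, updates some, inserts some new ones, and shuffles a few positions, then hand the whole array to the framework as if it had just come over fetch/json.

```ts
type Row = { id: number; label: string };

let nextId = 1001;

function mutateList(rows: Row[]): Row[] {
  const out: Row[] = [];
  for (const row of rows) {
    const roll = Math.random();
    if (roll < 0.10) continue;                                            // ~10% removed
    if (roll < 0.15) out.push({ id: row.id, label: row.label + ' !!!' }); // ~5% updated
    else out.push({ ...row });                                            // kept
  }

  // ~5% brand-new items spliced into random positions
  const inserts = Math.round(rows.length * 0.05);
  for (let i = 0; i < inserts; i++) {
    const id = nextId++;
    out.splice(Math.floor(Math.random() * (out.length + 1)), 0, { id, label: `new ${id}` });
  }

  // a handful of random swaps so key order is unpredictable
  for (let i = 0; i < 10; i++) {
    const a = Math.floor(Math.random() * out.length);
    const b = Math.floor(Math.random() * out.length);
    [out[a], out[b]] = [out[b], out[a]];
  }

  // round-trip through JSON so nothing can rely on object identity
  return JSON.parse(JSON.stringify(out));
}
```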

thoughts? @krausest @ryansolid @fabiospampinato
