
Do the benefits of signals as a language feature outweigh the costs of solidification into a standard? #220

devmachiine opened this issue May 12, 2024 · 30 comments


@devmachiine

I think it's great that there is a drive toward standardizing signals, but I think the concept is too specialized to be standardized here.

Q: Why are Signals being proposed in TC39 rather than DOM, given that most applications of it are web-based?

A: Some coauthors of this proposal are interested in non-web UI environments as a goal, but these days, either venue may be suitable for that, as web APIs are being more frequently implemented outside the web. Ultimately, Signals don't need to depend on any DOM APIs, so either way works. If someone has a strong reason for this group to switch, please let us know in an issue. For now, all contributors have signed the TC39 intellectual property agreements, and the plan is to present this to TC39.

(please let us know in an issue - that's what this issue is for)

Signals are an interesting way to approach state management, but they are a much more complex concept than something like a PriorityQueue/Heap/BST etc., which I think would be more generally useful as part of JavaScript itself.

What problem domains besides some UI frameworks would benefit from it? Are there examples of signals as a language feature in other programming languages?

What would be the benefit of having signals baked-in as a language feature over having a library for doing it?

When something is part of a standard, changes and additions take more work and time than they would for a stand-alone library. For signals to become part of JavaScript, I think there would have to be a big advantage over a library.

I can imagine some pros being

  • Increased performance
    • Higher than an order of magnitude?
    • How much CPU does signal-related processing consume on a website?
  • Easier for a few web frameworks and "Some coauthors" to use signals

Benefits of being part of JavaScript (the same would be true if it were part of the DOM API instead):

  • Less code to ship
  • Better debugging possibilities

I can imagine some cons being

  • Not being generally useful for the vast majority of JavaScript programmers
  • Changes and improvements taking longer to implement across all browsers
  • Not being used after a decade, once we discover other approaches to reactive state management

Q: Isn't it a little soon to be standardizing something related to Signals, when they just started to be the hot new thing in 2022? Shouldn't we give them more time to evolve and stabilize?

A: The current state of Signals in web frameworks is the result of more than 10 years of continuous development. As investment steps up, as it has in recent years, almost all of the web frameworks are approaching a very similar core model of Signals. This proposal is the result of a shared design exercise between a large number of current leaders in web frameworks, and it will not be pushed forward to standardization without the validation of that group of domain experts in various contexts.

I think that, at the very least, a prerequisite for this becoming a language feature should be that almost all of the web frameworks use a shared library for their core model of Signals. That would prove there is a use case for signals as a standard, and the shared library would make a much easier API reference for a standard implementation.

If anyone could elaborate on why signals should be a language feature instead of a library, this issue could serve as a reference for the motivation to include it in JavaScript. 🙃

@dead-claudia
Contributor


@devmachiine @mlanza You might be surprised to hear that there is a lot of precedent elsewhere, even including the name "signal" (almost always as an analogy to physical electrical signals, if such a physical signal isn't being processed directly). I detailed an extremely incomplete list in #222 (comment), in an issue where I'm suggesting a small variation of the usual low-level idiom for signal/interrupt change detection.

It's been slow making its way to the Web, but if you squint a bit, this is only a minor variation of a model of reactivity informally in use since before transistors were even invented in 1926. Hardware description languages are likewise necessarily based on a similar model.

This kind of logic paradigm is everywhere in control-oriented applications, from robotics to space satellites, almost out of necessity.

Here's a simple behavioral model of an 8-bit single-operand adder-subtractor in Verilog, to show the similarity to hardware.

And yes, this is fully synthesizable.

// Adder-subtractor with 4 opcodes:
// 0 = no-op (no request)
// 1 = stored <= in
// 2 = out <= stored + in
// 3 = out <= stored - in
module adder_subtractor(clk, rst, op, in, out);
  input clk, rst;
  input[1:0] op;
  input[7:0] in;
  output reg[7:0] out;

  reg[7:0] stored = 0;
  always @ (posedge clk) begin
    if (rst)
      stored <= 0;
    else case (op)
      2'b00 : begin
        // do nothing
      end
      2'b01 : begin
        stored <= in;
      end
      2'b10 : begin
        out <= stored + in;
      end
      2'b11 : begin
        out <= stored - in;
      end
    endcase
  end
endmodule

Here's an idiomatic translation to JS signals, using methods instead of opcodes:

class AdderSubtractor {
    #stored = new Signal.State(0)

    reset() {
        this.#stored.set(0)
    }

    nop() {}

    set(value) {
        this.#stored.set(value & 0xFF)
    }

    add(value) {
        return (this.#stored.get() + value) & 0xFF
    }

    subtract(value) {
        return (this.#stored.get() - value) & 0xFF
    }
}

To allow for external monitoring in physical circuits, you'll need two pins:

  • A "notify" output signal raised high whenever connected circuits need to be alerted.
  • An optional "notify ack" input that clears that output signal when raised high. (Sometimes needed, but not always.)

Then, consuming circuits can detect the output's rising edge and handle it accordingly.
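That two-pin idiom can be sketched in plain JS. This is only an illustrative stand-in; the class and method names are mine, not from any spec or the proposal:

```javascript
// Sketch of the two-pin notify idiom described above: a "notify" line that
// latches high, and an optional "notify ack" that clears it on read.
class NotifyLine {
  #high = false;

  // Producer side: raise the notify line. Stays high until acknowledged,
  // so multiple raises coalesce like a latched pin.
  raise() { this.#high = true; }

  // Consumer side: sample the line. With auto-ack, returns true exactly
  // once per raise, mimicking rising-edge detection.
  poll({ ack = true } = {}) {
    const wasHigh = this.#high;
    if (wasHigh && ack) this.#high = false; // the "notify ack" pin
    return wasHigh;
  }
}

const line = new NotifyLine();
line.raise();
line.raise();             // coalesces with the first raise
console.log(line.poll()); // true  (edge observed, acknowledged)
console.log(line.poll()); // false (no new raise since the ack)
```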

This idiom is very common in hardware and embedded. And these aren't always one-to-one connections.

Here's a few big ones that come to mind:

  • Reset buttons are normally connected to several circuits in parallel. And these are often wired up to both main power (with an in-circuit delay for stability) and dedicated reset buttons, making it a many-to-many connection. When this rises, the circuit is reset to some default state.
  • SMBus has a one-way host clock wire, but also a two-way data wire that both a host and connected devices can drive. This two-way data wire could be thought of as a many-to-many connection as sometimes (though rarely) you even see such busses have more than one host on them, complete with the ability to drive them. And the spec does provide for a protocol for one host to take over for another.
  • SMBus interrupt signals (SMBALERT#) are normally joined together (a "wired OR" connection in electronics jargon) and connected to an alert input/GPIO pin in a host MCU. This lets connected sensors tell the host to re-poll them while ensuring the host continues to retain full control over the bus's clock. (This side channel is needed to avoid the risk of high-address devices becoming unable to notify a host due to losing arbitration to frequent host or low-address talk.)

It's not as common as you might think inside a single integrated circuit, though, since you can usually achieve what you want through simple boolean logic and (optionally) an internal clock output pin. It's between circuits where it's most useful.

Haskell's had similar for well over a decade as well, though (as a pure functional language) it obviously did signal composition differently: https://wiki.haskell.org/Functional_Reactive_Programming

And keyboard/etc. events are much easier to manage performantly in interactive OpenGL/WebGL-based code like simple games if you convert keyboard events to persistent boolean "is pressed" states, save mouse position updates to dedicated fields to handle deltas next frame, and so on. In fact, this is a very common way to manage game state, and the popularity of just rendering every frame like this is also why Dear ImGui is so popular in native games. For similar reasons, that library also has some traction in highly interactive, frequently updating native apps that are still ultimately window- or box-based (like most web apps).

If anything, the bigger question is why it took so long for front end JS to realize how to tweak this very mature signal/interrupt-based paradigm to suit their needs for more traditional web apps.

@dead-claudia
Contributor

As for other questions/concerns:

  • Signal performance and memory usage both would be improved with a native implementation.

    • Intermediate array/set objects could be avoided, saving about 12 bytes per signal in V8.
    • Advance knowledge that it's just object reference equality, plus the fact that there are so few values, means the watcher set can just be an array, and array search is just a matter of finding a 32-bit value. This also saves 4 bytes per active watcher, at the cost of being slower for large numbers of watchers (though after a certain size, maybe 8 watchers, it could be promoted to a proper Set anyway).
    • With the above, watcher add, remove, and notify would only need one load to get the backing array to iterate. Further, iteration on the set doesn't need to go through the ceremony of either set.forEach or set.values(). If you split between a small array and a large set, you could even use the same loop for both and just change the increment count and start offset (and skip on holes), a code size (and indirectly performance) optimization you can't do in userland.
    • For my proposal in Watcher simplification #222, one could save an entire resolver function worth of overhead, speeding up the notification process to be about the same as queueMicrotask(onNotify). You don't even need to allocate resolver functions at all, just the promise and its associated promise state.
  • As for utility, it's not generally useful to most server developers. This is true. It's also of mixed utility to game developers. It is somewhat niche. But there are two points to consider:

    1. It would be far from the first niche proposal to make it in, and there's functionality even more niche than this. Atomics are very niche in the world of browsers. And the receiving side of generators (especially of async generators) is also very niche. I've never used the first in the wild, and I almost completely stopped using the second when async/await became widely supported.
    2. Just re-read the second paragraph of this comment. That history suggests to me that this has a lot more staying power than you'd think at first glance. It's similar to the preference of message passing over shared memory for worker threads in both Node and the Web - it took off as a fad on the server with microservices, but those microservices became the norm as, for most, it's objectively easier both to secure and to scale. (Only caveat: don't prematurely jump to Kubernetes.) In browsers, Redux is essentially this same kind of message passing system, but for state management.
  • The proposal is intentionally trying to stay minimal, but still good enough to make direct use possible. Contrast this with URL routing, where on the client side, HTML spec writers only added a very barebones history API and all but required a client-side framework to fill in the rest.

    • They recently added a URL pattern implementation, but most routers' code is in other areas, and URL matching is nobody's bottleneck.
    • This was far from the only Web API addition to rely on magic framework pixie dust to save the day - web components, service workers, and HTTP/2 push are a few more that come to mind. Even the designers behind Trusted Types left non-framework use to a near afterthought.
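The small-array-to-Set promotion mentioned in the performance bullets above can be approximated in userland, though an engine could do it with far less overhead. This is a sketch only; the threshold of 8 and all names are illustrative:

```javascript
// Userland sketch of the watcher-storage strategy: keep watchers in a
// plain array while small (cheap linear scans, no Set allocation), and
// promote to a Set once the collection grows. Threshold is illustrative.
const PROMOTE_AT = 8;

class WatcherStore {
  #watchers = []; // Array while small, Set once promoted

  add(watcher) {
    const w = this.#watchers;
    if (w instanceof Set) { w.add(watcher); return; }
    if (w.includes(watcher)) return; // linear scan is cheap when small
    w.push(watcher);
    if (w.length > PROMOTE_AT) this.#watchers = new Set(w);
  }

  remove(watcher) {
    const w = this.#watchers;
    if (w instanceof Set) {
      w.delete(watcher);
    } else {
      const i = w.indexOf(watcher);
      if (i !== -1) w.splice(i, 1);
    }
  }

  notify(...args) {
    // One loop handles both backing shapes, as described above.
    for (const watcher of this.#watchers) watcher(...args);
  }
}
```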

I think in the very least a prerequisite for this as a language feature should be that almost all of the web frameworks use a shared library [...]

A single shared library isn't on its own a reason to do that. And sometimes, that library idiom isn't even the right way to go.

Sometimes, it is truly one library, and the library has the best semantics: async/await's semantics came essentially from the co module from npm, and almost nothing else came close to it in popularity. Its semantics were chosen as it was the simplest and soundest, though other mechanisms were considered. (The syntax choice was taken from C# due to similarity.) But this is the exception.

Sometimes, it's a few libraries dueling it out, like Moment and date-fns. The very heavy (stage 3) Temporal proposal was created to ultimately subsume those with a much less error-prone framework for dates and times, one clearly somewhat influenced by the Intl APIs. This is still not the most common case, though.

Sometimes, it's numerous libraries offering the exact same utility, like Object.entries and Object.fromEntries, both previously implemented in Lodash, Underscore, jQuery, and Ramda, among so many others that I gave up years ago even trying to keep track of the "popular" ones with such helpers. In fact, both ES5's Object.keys and all the Array prototype methods added from ES5 to today were added citing this same kind of extremely broad library and helper precedent. CoffeeScript of old even gave syntax for part of that - here's each of the main object methods (roughly) implemented in it:

Object.keys = (o) ->
    (k for own k, v of o)

Object.values = (o) ->
    (v for own k, v of o)

Object.entries = (o) ->
    ([k, v] for own k, v of o)

Object.fromEntries = (entries) ->
    o = {}
    for [k, v] in entries
        o[k] = v
    o
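For reference, here's what those helpers approximate, using the built-ins as they exist in modern JS:

```javascript
// The built-ins the CoffeeScript helpers above roughly implement.
const o = { a: 1, b: 2 };

console.log(Object.keys(o));    // ["a", "b"]
console.log(Object.values(o));  // [1, 2]
console.log(Object.entries(o)); // [["a", 1], ["b", 2]]

// fromEntries is the inverse of entries: pairs back to an object.
console.log(Object.fromEntries([["a", 1], ["b", 2]])); // { a: 1, b: 2 }
```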

Speaking of CoffeeScript, that's even inspired additions of its own. And there's been many cases of that and/or other JS dialects inspiring additions.

  • The decorators proposal was first experimented on using a custom Angular-specific JS superset called AtScript, and after that got folded into Babel, it later eventually found its way into TypeScript under a compiler option.
  • Classes were inspired by a mix of CoffeeScript and early TypeScript. TypeScript clearly inspired the syntax (few changes occurred), but CoffeeScript heavily influenced the semantics (as shown by the relatively few keywords). This inspiration continued even into private fields, where instead of using the private reserved word, a sigil was used instead.
  • Optional chaining and nullish coalescing were in CoffeeScript back in 2010 (Groovy, a JVM language, beat it to the punch in 2007, but I couldn't find any others before CoffeeScript), with literally the same exact syntax (and semantics) it has in JS today. Even optional function calls and assignment have a direct equivalent. The only differences are that JS uses a ?? b and a ??= b where CoffeeScript uses a ? b and a ?= b, and JS uses ?. for calls and bracketed accesses as well (to avoid ambiguity with ternary expressions). No a? shorthand was added for a == null, though - that was explicitly rejected.

There are also other cases (not just Temporal) where existing precedent was more or less thrown away for a clean-sheet redesign. For one glaring example, iterables tossed most existing precedent. Symbol-named methods are used by nobody else. Nobody had .throw() or .return() iterator methods. .next() returns a special data structure like nobody else. (Most use "has next" and "get next and advance", Python uses a special exception to stop, and many others stop on null.) Library precedent centered around .forEach and lazy sequences, which was initially postponed, with some members at the time rejecting it (this has obviously since changed). JS generators are full stackless coroutines able to have both yield and return values, but so did Python's about a decade prior, so that doesn't explain away the modeling difference.

@dead-claudia
Contributor

It's not that I don't think signals are a staple of development. They most certainly are, at least for me. I just think you might get some pushback on what makes sense for the primitives.

For context, I myself coming in was hesitant to even support the idea of signals until I saw this repo and dug deeper into the model to understand what was truly going on.

And yes, there's been some pushback. In fact, I myself have been pushing back on two major components of the current design:

  • I currently find the justification of why the Signal.subtle namespace exists pretty flimsy, and I feel it should be flattened out instead. (TL;DR: the crypto.subtle analogy is weak, and all other justifications I've seen attempted are even less persuasive for me.)
  • The change detection mechanism's current registry-based design (using a Watcher class) has a number of issues, both technical and ergonomic. (I'll omit specifics here for brevity - they are very nuanced.)

I also pushed back against watched/unwatched hooks on computeds for a bit, but have since backed off from that.

I've also been pushing hard for the addition of a secondary tracked (and writable) "is pending" state to make async function-based signals definable in userland.

@dead-claudia
Contributor

@mlanza Welcome to the world of the average new stage 1 proposal, where everything is wildly underspecified, somehow both hand-wavy and not, and very much in flux. 🙃

https://tc39.es/process-document/ should give an idea what to expect at this stage. Stage "0" is the wildest dreams, and stage 1 is just the first attempt to bring a dose of reality into it.

Stage 2 is where the rubber actually meets the road with most proposals. It's where the committee has solidified on a particular solution.

Note that I'm not a TC39 member. I happen to be a former Mithril.js maintainer who's still somewhat active behind the scenes in that project. I have some specific interest in this as I've been investigating the model for a possible future version of Mithril.js.

@devmachiine
Author

A single shared library isn't on its own a reason to do that. And sometimes, that library idiom isn't even the right way to go.

Good point, I agree. I can see the similarity to game designers and gamers who propose balance changes that wouldn't benefit the game (or its players) as a whole and/or would have unintended consequences.

everywhere in control-oriented applications ... very common in hardware and embedded

Interesting, especially the circuitry example! But "X exists in Y" isn't enough justification on its own to include X in Z. I don't think JavaScript is geared toward those use cases; they're more in the domain of C/Zig.

As for utility, it's not generally useful to most server developers. This is true. It's also of mixed utility to game developers. It is somewhat niche. But there's two points to consider: It would be far from the first niche proposal to make it in, and there's functionality even more niche than this. Atomics are very niche in the world of browsers.

Good point! I found the same to be true with the Symbol primitive. For years, I didn't really get it, but once I had a use case, I loved it.

I reconsidered some of the pros I stated for signals being baked into the language (or DOM API):

  • Less code to ship - Even if signals usage is ubiquitous, technically fewer bytes are sent over the wire, but practically not.
  • Performance - Technically yes, but practically? If a substantial portion of compute is taken up by signal processing, it's probably a simulation or control-oriented application, and there I think it's outside JavaScript's domain again.

Recall there is a proposal for adding Observables to the runtime, which is directly related to signals:
https://github.com/tc39/proposal-observable

There were similar concerns in the discussion that concluded with not moving the Observable proposal forward:

I think there will be a lot of repeat discussion:

Why does this need to be in the standard library? No answer to that yet.
Where does this fit in? The DOM
Are there use cases in Node.js?
(examples of DOM apis that make sense in DOM and Node, but not in language)
Concerns about where it fits in host environments
Stronger concerns: will this be the thing actually _used_ in hosts?

I can appreciate the standardization of signals, but I'm not convinced that TC39 is the appropriate home for them. The functionality can be provided via a library, which is much easier to extend and improve across contexts.

@dead-claudia
Contributor

Technically yes, but practically? If a substantial portion of compute is taken up by signal processing, it's probably a simulation or control-oriented application, and there I think it's outside JavaScript's domain again.

@devmachiine You won't likely see .set show up in performance profiles, but very large DOM trees (I've heard of trees in the wild as large as 50k elements, and once had to direct someone with 100k+ SVG nodes to switch to canvas) that use signals and components heavily could see .get() showing up noticeably.

But just as importantly, memory usage is a concern. If you have 50k signals in a complex monitoring app (say, 500 items, each with 15 discrete visible text fields, 50 fields across 4 dropdowns, 20 error indicators, and 5 inputs), and you can shave an average of about 20 bytes off each of those signals simply by removing a layer of indirection (2x4=8 bytes) and not allocating arrays for single-reference sets (2x32=64 bytes per impacted object, conservatively assuming about 20% are single-listener + single-parent), you could shave off around an entire megabyte of memory usage. And that could be noticeable.
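Spelling out the arithmetic behind that estimate, using the hypothetical monitoring app's numbers (the comment rounds the ~45k count up to 50k):

```javascript
// Back-of-envelope version of the memory estimate above:
// 500 items x (15 text fields + 50 dropdown fields + 20 error
// indicators + 5 inputs) signals each, ~20 bytes saved per signal.
const signals = 500 * (15 + 50 + 20 + 5); // 45000 signals (~"50k")
const savedPerSignal = 20;                // average bytes saved
const totalSaved = signals * savedPerSignal;

console.log(signals);    // 45000
console.log(totalSaved); // 900000 bytes - roughly a megabyte
```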

@justinfagnani
Collaborator

To me, the biggest advantage of this being part of the language is interoperability.

If you want multiple libraries and UI components to interoperate by watching signals in a single way, a standard feature is the only way to accomplish that. It's infeasible to have every library use the same core signals library and to eliminate all the sources of duplication (package managers, CDNs, etc.) that would bifurcate the signal graph.
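A toy illustration of that bifurcation, using two hand-rolled tracking systems (everything here is invented for the sketch; neither resembles the proposal's API): a computed from one library silently fails to track a state from the other, because each library has its own dependency-tracking context.

```javascript
// Two independent userland signal implementations, each with a private
// "currently evaluating computed" context. Dependencies are only
// recorded when a state and a computed share the same context.
function makeSignalLib() {
  let active = null; // the computed currently being evaluated, if any
  return {
    state(value) {
      const subs = new Set();
      return {
        get() { if (active) subs.add(active); return value; },
        set(v) { value = v; for (const c of subs) c.invalidate(); },
      };
    },
    computed(fn) {
      const c = {
        stale: true,
        value: undefined,
        invalidate() { c.stale = true; },
        get() {
          if (c.stale) {
            const prev = active;
            active = c; // only THIS library's states can see it
            try { c.value = fn(); c.stale = false; }
            finally { active = prev; }
          }
          return c.value;
        },
      };
      return c;
    },
  };
}

const libA = makeSignalLib();
const libB = makeSignalLib();
const a = libA.state(1);
const doubled = libB.computed(() => a.get() * 2); // crosses the boundary
console.log(doubled.get()); // 2
a.set(10);                  // libB's computed never hears about this
console.log(doubled.get()); // still 2 - stale, because the graphs are split
```

A single built-in graph removes this failure mode, since there is only one tracking context for everyone.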

@justinfagnani
Collaborator

So to restate, perhaps a little differently, just give us the optimized primitives, not a replete, one-size-fits-all signal library.

Well, I also think that the built-in APIs should be as ergonomic as reasonably possible. We shouldn't require that a library be used to get decent DX.

I personally think the current API is near a sweet spot because it makes the common things easy (state and computed signals), and the complex things possible (watchers). Needing utility code for watchers makes sense, but IMO basic signal creation and dependency tracking should be usable with the raw APIs.

In this way, even the current library on offer could be built from these primitives

I struggle to think of what lower-level primitives could be useful. You need centralized state for dependency tracking. Maybe you could separate a signal's local state from its tracked state - say, State doesn't contain its own value - but you still need objects of some sort to store dependency and dirtiness data. I don't know what a lower-level API would even get you over the current State and Computed.

@justinfagnani
Collaborator

@mlanza To be concrete, what would your Atom implementation look like under the current API, vs an alternative that may be lower-level? Is State a hindrance?

@justinfagnani
Collaborator

First, what do you mean by "is State a hindrance"?

I mean, is the class Signal.State more difficult to use to implement your Atom than whatever a lower-level primitive might be? Can we compare the current API against a hypothetical one?

@justinfagnani
Collaborator

I'm just trying to get concrete here. Are these primitives in the current proposal minimal and complete? What would be lower level?

You seemed to be saying that the current API could be built from lower-level primitives. So I'm asking, what are those lower-level primitives? And how does your library look like implemented with the current proposed API vs those lower-level primitives?

@jlandrum

If we were talking significant performance improvements I'd have a significantly different opinion on this, but we aren't. I don't see any of the examples given here as convincing of the need for baking a concept into the language as a feature.

Sure, some languages have signals - many don't. Many of those that don't have their own messaging/queue systems instead.

When I look at proposals to any language, framework, etc., there are two criteria I consider:

  • What problem will this solve that cannot already be solved?
  • Will the balance between complexity and maintainability be maintained?

This proposal fails on both criteria, IMO. Signals can already be implemented - as they have been in so many libraries, each with its own pros and cons. Sure, those libraries may adopt native Signals internally, but then they have to work around limitations that aren't intrinsic to the language in order to provide the same experience, and their value-add is no longer signals but /their/ view of signals - a view countless users may already agree is optimal.

So if/when they get added, the JavaScriptCore, SpiderMonkey, and V8 teams will have to implement them in a compatible way. The Chromium, Safari, and Firefox teams will have to validate and ensure their functionality within their browser environments. The Bun, Deno, and Node.js teams will have to do the same within their non-browser environments. If even one of these gets something wrong, it leads to fragmentation, which leads to polyfills, which leads to additional complexity for something that already exists today and can be used with relative ease even without third-party libraries.

Performance is quite frankly the only argument here, and it's quite weak (again, just my opinion). A better approach would be to identify the specific issues that signals libraries face and try to solve those - which often leads to resolving issues beyond the topic at hand, while keeping ECMAScript from becoming a bloated mess of hundreds of reserved keywords and features that leave the last 5 lines of code looking like a completely different language than the next 5.

@justinfagnani
Collaborator

justinfagnani commented Oct 17, 2024

The thing that's missing for me in those two criteria is interop. Many languages' standard library features could be done in userland, but they gain a lot of ecosystem value when packaged with the language. Maps can be implemented in libraries, of course, but if you have a standard interface then lots of code can easily interact via the standard Map.

Signals have an even bigger interop benefit from being built in than utility classes do, because of their shared global state. It would be difficult to bridge multiple userland signal graphs correctly. By being built in, code can use standard signals and be useful to anyone working in any library or framework that uses standard signals. You can see this already with the signal-utils package.

Additionally, on the web, a very big benefit of being a standard is being able to be relied on by other standards. There's a lot of potential in having the DOM support signals directly in some way, but that's only possible if signals are built into either JS or the DOM.

@jlandrum

@justinfagnani then where do you draw the line? JSX? Type checking? JavaScript shouldn’t be an “ecosystem,” we shouldn’t be looking to add libraries as core language features unless there’s significant challenges that can’t otherwise be solved and that simply isn’t the case here.

Map solves significant limitations in the language that can’t be solved in userland without significant drawbacks. The memory cost of a userland approach alone justifies its existence as a language feature.

@NullVoxPopuli
Collaborator

Memory, speed, and interop are the three huge benefits i'm expecting with built-in signals.

@jlandrum

Memory can only be so optimized in a generalized system (hence my call to reconsider signals as a whole, as opposed to adding features that better facilitate signals).

Speed can only be optimized so far without making certain assumptions, and frameworks can often make better assumptions.

Interoperability isn’t particularly an issue and I’ve yet to see good examples of how this could provide better interoperability.

@EisenbergEffect
Collaborator

Reactive data interoperability has been a huge issue, in my experience. Unfortunately, I can't give the details around most of that due to NDAs. But it's a very serious issue. Standard signals would be worth it to me and my stakeholders even if the only thing it delivered was interoperability and none of the memory or performance hopes were realized.

@jlandrum

@EisenbergEffect signals aren't synonymous with reactivity. Signals enable reactivity, but reactivity isn't everything that signals encompass, and many other proposals were, IMO, far better approaches to solving this issue than built-in signals.

Signals are reactivity, messaging, concurrency, data management, and so on. Using signals just to address reactive processes is overkill.

@justinfagnani
Collaborator

Interoperability isn’t particularly an issue

I think there are a lot of people who would disagree with that.

@dead-claudia
Contributor

signals aren't synonymous with reactivity. Signals enable reactivity, but reactivity isn't everything that signals encompass, and many other proposals were, IMO, far better approaches to solving this issue than built-in signals.

Signals are reactivity, messaging, concurrency, data management, and so on. Using signals just to address reactive processes is overkill.

@jlandrum

  1. Signals aren't concurrent. They have zero interaction with promise job queues, and they don't manage anything related to queues.
  2. Messaging is crucial to reactivity in any purely discrete context. Without messages of some kind, there's nothing to react to in a discrete world.
  3. Many DOM APIs would be far simpler and easier to use if exposed as signals, and for that to happen, signals necessarily must live either in JS or in a WHATWG standard. Here are some of those:
    • Clipboard state
    • Drag and drop state
    • Hover state
    • Location state (would essentially remove the need for routing frameworks)
    • DOM ready state
    • Pointer state
    • Media buffering state
    • Much of Bluetooth
    • Sensor state
    • Parts of the WebRTC API

@jlandrum

@dead-claudia this still doesn't address the key point, and if anything only makes things worse for signals.

Signals - the concept - can use concurrency. Any person implementing this spec could choose to use concurrency at the engine level. Some signals libraries use concurrency. My point is that, even in the proposal, native Signals aim to implement only a small fraction of what many libraries offer, and the idea that "It's not intended for end users but library developers" is a massive red flag.

The point everyone keeps landing on is messaging - and yes, we do need better messaging. Many proposals mentioned were designed to address this but abandoned. Adding signals is not how we get there IMO.

As for making DOM APIs exposed to signals, that is a massive and drastic reworking of much of the API that can have some serious repercussions. Making various APIs return Signals would most certainly have performance penalties. If a library wants to do this - that's fine, the person using the library has accepted the ramifications. Most of these can easily - with minimal code - be handled with event listeners.

I've yet to see a compelling example of built-in signals being anything more than a QOL feature only for some people and nothing more.

@dead-claudia
Contributor

@jlandrum Replying in multiple parts.

@dead-claudia this still doesn't address the key point, and if anything only makes things worse for signals.

Signals - the concept - can use concurrency. Any person implementing this spec could choose to use concurrency at the engine level. Some signals libraries use concurrency.

In theory, yes. In practice, the spec and entire core data structure precludes implementation concurrency. In fact, the very in-practice lack of concurrency also makes async/await integration rather awkward.

My point is that, even in the proposal, native Signals aim to implement only a small fraction of what many libraries offer, [...]

About the only thing this doesn't natively offer that (1) frameworks commonly do offer and (2) isn't deeply specific to the application domain (like HTML rendering) is effects, and effects are by far one of the biggest feature requests in the whole repo.

But even that is very straightforward:

function effect(fn) {
    const tracker = new Signal.Computed(() => { fn() })
    // Run this part in your parent computed
    return function track() {
        queueMicrotask(() => tracker.get())
    }
}
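
For readers without the proposal's polyfill at hand, the scheduling idea behind that snippet can be shown with a self-contained toy (the `ToyState` class is invented; the real proposal auto-tracks dependencies rather than wiring effects explicitly). The key property is that multiple synchronous writes coalesce into a single effect rerun via `queueMicrotask`:

```javascript
// Minimal push-based state cell with microtask-coalesced effects.
// Illustrates only the batching pattern, not the proposal's API.
class ToyState {
  #value;
  #effects = new Set();
  #scheduled = false;
  constructor(value) { this.#value = value; }
  get() { return this.#value; }
  set(value) {
    this.#value = value;
    if (!this.#scheduled) {
      // First write in this turn schedules exactly one flush.
      this.#scheduled = true;
      queueMicrotask(() => {
        this.#scheduled = false;
        for (const fn of this.#effects) fn();
      });
    }
  }
  effect(fn) { this.#effects.add(fn); fn(); }
}

const n = new ToyState(1);
let runs = 0;
n.effect(() => { runs++; });             // runs once immediately
n.set(2);
n.set(3);                                // coalesced with the previous set
queueMicrotask(() => console.log(runs)); // prints 2: initial run + one batched rerun
```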

[...] and the idea that "It's not intended for end users but library developers" is a massive red flag.

It's not that signals are not intended for use by end users. End user usage is being considered, just not as a sole concern.

Syntax is being considered in the abstract to help simplify usage by end users. It's just been deferred to a follow-on proposal, like how async/await was a follow-on rather than immediately part of ES6/ES2015.

The point everyone keeps landing on is messaging - and yes, we do need better messaging. Many proposals mentioned were designed to address this but abandoned. Adding signals is not how we get there IMO.

Keep in mind, signals aren't a general-purpose reactivity abstraction. You can't process streams with them, for one. (That's what async generators and callbacks are for.)

They're specifically designed to simplify work with singular values that can change over time, and only singular values.
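
That distinction can be seen in miniature: a stream consumer sees every value in order, while a signal-style reader only ever sees the latest one. A toy illustration (the `Cell` class is invented here, not the proposal's API):

```javascript
// Stream-style: every value is delivered, in order.
async function* stream(values) {
  for (const v of values) yield v;
}

// Signal-style: only the current value is observable; intermediate
// writes are overwritten before anyone reads them.
class Cell {
  #value;
  constructor(v) { this.#value = v; }
  get() { return this.#value; }
  set(v) { this.#value = v; }
}

async function main() {
  const seen = [];
  for await (const v of stream([1, 2, 3])) seen.push(v);
  console.log(seen); // [ 1, 2, 3 ] - a stream preserves the history

  const cell = new Cell(1);
  cell.set(2);
  cell.set(3);
  console.log(cell.get()); // 3 - a signal only has "now"
}
main();
```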

As for making DOM APIs exposed to signals, that is a massive and drastic reworking of much of the API that can have some serious repercussions. Making various APIs return Signals would most certainly have performance penalties. If a library wants to do this - that's fine, the person using the library has accepted the ramifications. Most of these can easily - with minimal code - be handled with event listeners.

This is incorrect. Browsers can avoid a lot of work:

  • They can track which signals are outdated and mark them accordingly. Together with using double buffering for transmitting updated values, browsers can do concurrent, non-blocking notification of changes very efficiently.
  • They can use a lighter event listener that doesn't allocate a full, cancellable event object. Because signals won't be immediately accessible, spammy updates like scroll updates can be efficiently coalesced, significantly speeding up UI updates. (Browsers already coalesce pointer events for performance reasons.)

And much of the stuff I listed before can uniquely be accelerated further:

  • Input value

The input's value can be read directly from the element, and a second "scheduled value" and a third "external input" value can be used to track the actual external state.

  • On user input and input.value write, the user input value would be updated. Then the "scheduled value" would be swapped with the user input value, and if the previous scheduled value was null, a microtask would be scheduled to mark that signal as having an update. No UI locks needed.
  • On signal get and input.value read, the scheduled value is swapped with null, and the input value is set to that.

Combined with preventDefault being a boolean property on the signal, it can fully bypass the JS event loop.
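
The swap protocol described above can be modeled in plain JS. This is a speculative sketch of the bookkeeping only (the `InputValueCell` name is invented; a real implementation would live inside the browser engine, and `externalWrite` stands in for the browser's internal input path):

```javascript
// Models the "scheduled value" swap: external writes schedule at most
// one dirty notification per turn; reads drain the scheduled slot.
class InputValueCell {
  #current = "";        // value as the page last observed it
  #scheduled = null;    // pending external update, or null when idle
  #dirtyCallbacks = [];
  onDirty(fn) { this.#dirtyCallbacks.push(fn); }

  // Called on user input (the "external" side).
  externalWrite(value) {
    const wasIdle = this.#scheduled === null;
    this.#scheduled = value;
    if (wasIdle) {
      // Only the first write per turn schedules a notification.
      queueMicrotask(() => { for (const fn of this.#dirtyCallbacks) fn(); });
    }
  }

  // Called on signal .get() / input.value read: swap scheduled -> current.
  read() {
    if (this.#scheduled !== null) {
      this.#current = this.#scheduled;
      this.#scheduled = null;
    }
    return this.#current;
  }
}

const cell = new InputValueCell();
cell.onDirty(() => console.log("value:", cell.read()));
cell.externalWrite("h");
cell.externalWrite("hi"); // coalesced: logs "value: hi" exactly once
```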

  • Drag and drop state
  • Pointer state
  • Hover state

Pointer and mouse button/etc. state could be read directly, using double buffering. Unless coalesced values are needed, this is just a simple ref-counted object.

  • Location state ([...])

Near identical situation to inputs; just the source of updates (navigation rather than user action) is different.

  • DOM ready state

Near identical situation to inputs; the source of updates (navigation rather than user action) is different, and it only needs to double-buffer a 1-byte enum primitive rather than allocate a whole structure.

@jlandrum

@dead-claudia perhaps I need a more concrete picture of what this might look like for these examples.

Will it replace existing properties? If so, this would potentially break code not looking to use signals if they're trying to get a property of a DOM element as a string and instead get a Signal.

Will they be additional properties? If so, we already have a lot of property bloat to handle legacy code.

Will the properties be somehow convertible to Signals? If so, what about mixed use code bases? And how do they internally know that a non-reactive property needs to update on change?

What's being described sounds like a layer on top of the current ecosystem too - in which case it would most certainly increase resource usage, not reduce it, especially in the many cases where a simple object with a getter/setter/subscription model would suffice while also offering the flexibility of asynchronous updates.

I greatly appreciate the write up and explanations but this still sounds like something that promises one thing but will in reality be something completely different.

@dead-claudia
Contributor

@jlandrum First and foremost, these would never replace existing properties. New properties would necessarily need to be used in browsers.

There's not much property bloat in JS, but the ship for treating the property bloat problem sailed before we even had the problem in the first place. There have been very few things removed from browsers after once being broadly available, and only for one of a few reasons:

  1. They presented an active and severe security risk to users. Java applets and Flash embeds are the most prominent examples of this, but others like VBScript/JScript also presented similar risks. HTML APIs, both new and old, that have similar risks are also generally restricted to first-party HTTPS frames.
  2. They presented serious and severe performance issues. Sync XMLHttpRequests and arguments.caller come to mind here.
  3. They fell into complete disuse, even in legacy websites. Most obsolete elements in general are elements that fell out of use very early on with the advent of CSS.
  4. Their functionality was strictly additive, and removing or minimally stubbing them was sufficient. Ruby elements other than <ruby> and charset on <script> are two examples of this.
  5. <blink> is unique in that it not only rendered any text in it unreadable, it also presented a potential safety risk to users with certain disabilities like epilepsy. But this is the only case of this rationale I'm aware of.

What's being described sounds like a layer on top of the current ecosystem too - in which case it would most certainly increase resource usage, not reduce it, especially in the many cases where a simple object with a getter/setter/subscription model would suffice while also offering the flexibility of asynchronous updates.

Mouse location is currently not directly queryable. A hypothetical pointerState = document.primaryPointer.get() signal could return that information right away, and inside a watched signal, it could then be registered for updates.

Such a signal would simplify and accelerate a couple of needs that currently rely on highly advanced uses of pointer events:

  • First-person games and visualizations using mouse controls need the current mouse position, but they do not need prior mouse updates. All they really need is to read pointer.x and pointer.y in the main game loop so they know what angle to render the camera at in the scene.
  • Element drag and drop can simply read the pointer at the time of pointer down and set elem.style.transform = `translate(${start.x + pointer.x - click.x}px, ${start.y + pointer.y - click.y}px)` on every frame until pointer up. It's far simpler than the current drag and drop API.
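
The per-frame transform in the second bullet is pure arithmetic, so the position bookkeeping can be sketched and tested independently of any DOM or signal API (the `dragTransform` helper is invented for illustration; note that CSS `translate()` requires units such as `px`):

```javascript
// Computes the CSS transform for a dragged element, given where the
// element started, where the pointer was on pointerdown, and where
// the pointer is now. A pure function, cheap to call once per frame.
function dragTransform(start, click, pointer) {
  const dx = start.x + pointer.x - click.x;
  const dy = start.y + pointer.y - click.y;
  return `translate(${dx}px, ${dy}px)`;
}

// Element began at (10, 20); drag began at (100, 100); pointer now at (130, 150).
console.log(dragTransform({ x: 10, y: 20 }, { x: 100, y: 100 }, { x: 130, y: 150 }));
// → "translate(40px, 70px)"
```

In a hypothetical signal-based loop, `pointer` would come from something like the `document.primaryPointer.get()` call suggested above, read once per animation frame.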

@jlandrum

@dead-claudia new properties would be such a massive undertaking, given how much the timing of when such updates would need to be triggered varies. Plus they would absolutely need to be lazy to avoid the overhead caused by unsubscribed signals.

As for mouse position, there's no need to poll for it; the event gets fired every time it changes and doesn't get fired if it doesn't change. Yes - the event system is very inefficient given the number of Event instances that get created, but that's a separate issue entirely - and you can't just make it a signal; there would be a reasonable amount of pushback against having to call get() for this information when not using signals.

@dead-claudia
Contributor

@dead-claudia new properties would be such a massive undertaking, given how much the timing of when such updates would need to be triggered varies. Plus they would absolutely need to be lazy to avoid the overhead caused by unsubscribed signals.

The subscription would obviously be lazy, and signals as proposed today already have such semantics. The .get() would be eager, and read state the browser already needs to be able to efficiently access from the main JS thread anyways.

As for mouse position, there's no need to poll for it; the event gets fired every time it changes and doesn't get fired if it doesn't change. Yes - the event system is very inefficient given the number of Event instances that get created, but that's a separate issue entirely - and you can't just make it a signal; there would be a reasonable amount of pushback against having to call get() for this information when not using signals.

Evidence? My experience differs on this one.

  • WebGPU's API surface is gigantic, yet implementors embraced it both on and off the web because it aligned fairly closely with Vulkan.
  • The Reporting API hooks into a ton of things, yet every browser vendor was happy to oblige with little pushback.
  • Mutation observers provide a significantly larger API footprint than their legacy counterparts, and yet browser vendors specifically pushed for it because of perf issues with the old API.
  • IndexedDB was directly conceived as a replacement for both WebSQL and (to an extent) localStorage. The API is gigantic, but the internals of what it replaced were even heavier, despite having far fewer methods.

@jlandrum

@dead-claudia Why would WebGPU have influenced off-web APIs? They're completely different use cases and environments. Is it possible individuals have applied WebGPU semantics to their projects? Absolutely - but this furthers my point that signals are best made possible through changes to ECMAScript, as opposed to putting them in directly; it should be up to those implementing their flavor of signals to implement them.

Reporting API isn't something that changes how developers would interact with things; I'm not sure what the point of mentioning it is.

Mutation observers are a questionable feature that were the result of being a product of their time; they likely wouldn't exist as they do if we had many of the alternatives we could have with modern language capabilities.

WebSQL wasn't standardized for a reason, but it too lives on its own and doesn't exist within the DOM or other APIs unless explicitly used.

@dead-claudia
Contributor

@jlandrum Responding in parts.

Reporting API isn't something that changes how developers would interact with things; I'm not sure what the point of mentioning it is.

The API surface is small, but it involves a lot of work on browsers' part.

Similarly, a theoretical changedSignal = elem.attr("changed"), while having a small API surface, would require significant changes in how attributes work, if browsers want to make it more efficient than essentially a mutation observer wrapper.

Mutation observers are a questionable feature that were the result of being a product of their time; they likely wouldn't exist as they do if we had many of the alternatives we could have with modern language capabilities.

Mutation observers serve important roles in page auditing, and they enable userscripts and extensions to perform types of page augmentations they wouldn't be able to do otherwise.

WebSQL wasn't standardized for a reason, but it too lives on its own and doesn't exist within the DOM or other APIs unless explicitly used.

But IndexedDB was. And my point for bringing that (and WebGPU) up is to raise awareness that browsers don't shy away from highly complex APIs as long as they actually bring something to the table.

Anyways, this is starting to get off-topic, and I'm starting to get the impression confirmation bias may be getting in the way of meaningful discussion here. So consider me checked out of this particular discussion. 🙂

@jlandrum

This comment has been minimized.

@ctcpip
Member

ctcpip commented Nov 13, 2024

Gentle reminder that all participants are expected to abide by the TC39 Code of Conduct. In summary:

  • Be respectful
  • Be friendly and patient
  • Be inclusive
  • Be considerate
  • Be careful in the words that you choose
  • When we disagree, try to understand why
