
Dancing between state and effects - a real-world use case #15240

jlongster opened this issue Mar 28, 2019 · 31 comments

@jlongster
Contributor

jlongster commented Mar 28, 2019

I started this as a gist, but Dan mentioned that this would be a good discussion issue, so here goes. I've been writing new code with hooks and refactoring existing code into them for a while now. For 95% of code they are great and very straightforward once you get the basic idea. There are still a few more complex cases where I struggle to find the right answer, though. This is an attempt to explain them.

The use case

This is a real-world use case from an app I'm building: interacting with a list of items. I've simplified the examples into codesandboxes to illustrate the basic idea.

Here's the first one: https://codesandbox.io/s/lx55q0v3qz. It renders a list of items, and if you click on any of them, an editable input will appear to edit it (it doesn't save yet). The colored box on the right will change whenever an item rerenders.

If you click around in the items, you can see that changing which item is being edited rerenders all of the items, even though the Row component is wrapped with React.memo! They all rerender because onEdit is a new function each time the app renders, which gives every row new props.
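For reference, here's a rough sketch of the setup (component and prop names here are my approximations of the sandbox, not copied from it):

import React, { useState } from "react";

// Hypothetical Row, wrapped in React.memo so it only rerenders when its props change
const Row = React.memo(function Row({ item, editing, onEdit }) {
  return (
    <div onClick={() => onEdit(item.id)}>
      {editing ? <input defaultValue={item.name} /> : item.name}
    </div>
  );
});

function App({ initialItems }) {
  let [items] = useState(initialItems);
  let [editingId, setEditingId] = useState(null);

  // A new function on every render, so React.memo never bails out for any Row
  let onEdit = id => setEditingId(id);

  return items.map(item => (
    <Row
      key={item.id}
      item={item}
      editing={item.id === editingId}
      onEdit={onEdit}
    />
  ));
}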

Maintaining callback identity

We want onEdit to be the same function across all future renders. In this case it's easy because it doesn't depend on anything, so we can simply wrap it in useCallback with an empty dependencies array:

  let onEdit = useCallback(id => {
    setEditingId(id);
  }, []);

Now you can see that clicking around only rerenders the necessary items (only those colors change): https://codesandbox.io/s/k33klz68yr

Implementing saving

We're missing a crucial feature: after editing an item, it should save the value on blur. In my app the "save" event gets triggered differently, but doing it on blur is fine here.

To do this, we create an onSave callback in the app and pass it down to each item, which calls it on blur with the new value. onSave takes the updated item, updates the items array with it, and sets the items state.
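The consumer side might look roughly like this (a sketch; the sandbox's actual markup differs):

// Inside the memoized Row: an uncontrolled input that reports the edited
// value back up when it loses focus
function EditableName({ item, onSave }) {
  return (
    <input
      defaultValue={item.name}
      onBlur={e => onSave({ ...item, name: e.target.value })}
    />
  );
}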

Here it is running: https://codesandbox.io/s/yvl79qj5vj

You'll notice that all items are rerendering again when saving. The list rerenders twice when you click another item: first when you mouse down and the input loses focus, and then again to edit the different item. So all the colors change once, and then only the editing row's color changes again.

The reason all of them rerender is that onSave is now a new callback on every render. But we can't fix it with the same technique as onEdit, because it depends on items: we have to create a new callback that closes over items, otherwise you'd lose previous edits. This is the "callbacks are recreated too many times" problem with hooks.

One solution is to switch to useReducer. Here's that implementation:
https://codesandbox.io/s/nrq5y77kj0

Note that I still wrap the dispatches up into onEdit and onSave callbacks that are passed down to the row. I find passing callbacks to be clearer in most cases, and it works with any components in the ecosystem that already expect callbacks. We can simply use useCallback with no dependencies, since dispatch is always the same.
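The wiring looks roughly like this (a sketch, using the action types from the reducer shown further down):

let [state, dispatch] = useReducer(reducer, {
  items: itemData,
  editingId: null
});

// dispatch is guaranteed to be stable, so these callbacks never change identity
let onEdit = useCallback(id => dispatch({ type: "edit-item", id }), []);
let onSave = useCallback(item => dispatch({ type: "save-item", item }), []);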

Note how even when saving an item, only the necessary rows rerender.

The difference between event handlers and dispatch

There's a problem, though. This works in a simple demo, but in my real app onSave both optimistically updates local state and saves it off to the server. It performs a side effect.

It's something like this:

async function onSave(transaction) {
  let { changes, newTransactions } = updateTransaction(transactions, transaction);
  // optimistic update
  setTransactions(newTransactions)
  // save to server
  await postToServer('apply-changes', { changes })
}

There's a big difference between when an event handler runs and when a dispatched action is processed. Event handlers run whenever they are triggered (naturally), but dispatched actions are processed (the reducer runs) during rendering. The reducer must be pure because of this.

Here's the reducer from https://codesandbox.io/s/nrq5y77kj0:

  function reducer(state, action) {
    switch (action.type) {
      case "save-item": {
        let { item } = action;
        return {
          ...state,
          items: state.items.map(it => (it.id === item.id ? item : it))
        };
      }
      case "edit-item": {
        return { ...state, editingId: action.id };
      }
    }
  }

How is save-item also supposed to trigger a side effect? First, it's important to understand these 3 phases:

Event handler -> render -> commit

Events run in the first phase, which causes a render (this is when dispatched actions are processed), and when everything is finally ready to be flushed to the DOM, React does so in a "commit" phase, which is when all effects run (more or less).

We need our side effect to run in the commit phase.

Option 1

One option is to use a ref to "mark" the saving effect to be run. Here's the code: https://codesandbox.io/s/m5xrrm4ym8

Basically you create a flag as a ref:

let shouldSave = useRef(false);

Luckily, we've already wrapped the save dispatch in an event handler. Inside onSave we set this flag to true. We can't do it inside the reducer because the reducer must be pure:

  let onSave = useCallback(item => {
    shouldSave.current = true;
    dispatch({ type: "save-item", item });
  }, []);

Finally, we define an effect that runs after every render, checks the flag, and resets it:

  useEffect(() => {
    if (shouldSave.current) {
      // save... all the items to the server?
      post(items)
      shouldSave.current = false;
    }
  });

I thought this option was going to work, but I just ran into this issue: we don't know what to save anymore. We certainly don't want to send the entire items array to the server! I suppose we could store the item in the ref, but what happens if multiple events fire before the effect runs? I suppose you could store an array (see the sketch below), but... do we really need that?
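For completeness, that array variant would look roughly like this (a sketch, not what I ended up doing):

let pendingSaves = useRef([]);

let onSave = useCallback(item => {
  // Queue the item alongside the dispatch, instead of only setting a flag
  pendingSaves.current.push(item);
  dispatch({ type: "save-item", item });
}, []);

useEffect(() => {
  if (pendingSaves.current.length > 0) {
    let toSave = pendingSaves.current;
    pendingSaves.current = [];
    // Only post the items that were actually saved since the last commit
    toSave.forEach(item => post("/save-item", { item }));
  }
});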

Option 2

Note: I just noticed this option is documented in How to read an often-changing value from useCallback?, but I disagree with the tone used there. I think this is a fine pattern and better in many cases than dispatch, even if it's not quite as robust. Especially since dispatch is not as powerful as callbacks (see the end of this section).

Keeping around all of the data we need to do the effect might work in some cases, but it feels a little clunky. If we could "queue up" effects from the reducer, that would work, but we can't do that. Instead, another option is to embrace callbacks.

Going back to the version which used a naive onSave which forced all items to rerender (https://codesandbox.io/s/yvl79qj5vj), onSave looks like this:

  let onSave = useCallback(
    item => {
      setItems(items.map(it => (it.id === item.id ? item : it)));
    },
    [items]
  );

The core problem is that it depends on items. We need to recreate onSave because it closes over items. But what if it didn't close over it? Instead, let's create a ref:

let latestItems = useRef(items);

And an effect which keeps it up-to-date with items:

useEffect(() => {
  latestItems.current = items
});

Now the onSave callback can read the ref to always get the up-to-date items, which means we can memoize it with useCallback:

let onSave = useCallback(item => {
  setItems(latestItems.current.map(it => (it.id === item.id ? item : it)));
}, []);

We are intentionally opting to always reference the latest items. The biggest change with hooks, in my opinion, is that they are safe by default: an async function will always reference the exact same state that existed at the time it was called. Classes work the other way: you access state from this.state, which can be mutated between pieces of async work. Sometimes you want that, though, because it lets you maintain callback identity.

Here is the running sandbox for it: https://codesandbox.io/s/0129jop840. Notice how you can edit items and only the necessary rows rerender, even though it updates items. Now, we can do anything we want in our callback, like posting to a server:

let onSave = useCallback(item => {
  setItems(latestItems.current.map(it => (it.id === item.id ? item : it)));
  // save to server
  post('/save-item', { item })
}, []);

Basically, if all you need is the latest data since the last commit, callbacks can be memoized just as well as reducers. The drawback is that you need to put each piece of data you need in a ref. If you have lots of pieces of data and only a few simple effects, reducers would be better, but in my case (and I suspect in many others) it's easier to use callbacks with refs.
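If you do this a lot, the ref + effect pair can be pulled into a tiny helper (a sketch; useLatest is just a name I'm using here, not an existing API):

function useLatest(value) {
  let ref = useRef(value);
  useEffect(() => {
    ref.current = value; // refreshed after every commit
  });
  return ref;
}

// Usage: same behavior as the hand-written ref + effect above
let latestItems = useLatest(items);
let onSave = useCallback(item => {
  setItems(latestItems.current.map(it => (it.id === item.id ? item : it)));
}, []);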

It's nice too because in my real app the save process is more complicated. It needs to get changes back from the server and apply them locally as well, so it looks more like this:

let onSave = useCallback(async item => {
  setItems(latestItems.current.map(it => (it.id === item.id ? item : it)));
  // save to server
  let changes = await post('/save-item', { item })
  applyChanges(latestItems.current, changes)
}, []);

Maintainability-wise, it's really nice to see this whole flow in one place. Breaking it up to manually queue up effects and do a dance with useReducer feels much more convoluted.

Option 3

I suppose another option would be to try to "mark" the effect to be run in state itself. That way you could do it in useReducer as well, and it would look something like this:

function reducer(state, action) {
  switch (action.type) {
    case "save-item": {
      let { item } = action;
      return {
        ...state,
        items: state.items.map(it => (it.id === item.id ? item : it)),
        itemsToSave: state.itemsToSave.concat([item])
      };
    }
    // ...
  }
}

An effect would then check the itemsToSave state and save those items off. The problem is resetting that state: the effect would have to change state, causing a useless rerender, and there's no deterministic way to make sure the effect doesn't run multiple times before itemsToSave gets reset.
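Concretely, the draining effect would have to look something like this (a sketch of the problem, with a made-up clear-items-to-save action, not a recommendation):

useEffect(() => {
  if (state.itemsToSave.length > 0) {
    state.itemsToSave.forEach(item => post("/save-item", { item }));
    // Resetting requires another dispatch, which causes a useless rerender, and
    // if some other update rerenders the component before that reset commits,
    // this effect can run again and post the same items twice.
    dispatch({ type: "clear-items-to-save" });
  }
});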

In my experience, mixing effects into state and causing extra renders makes things a lot more difficult to maintain and debug.

What's the difference between Option 1 and 2?

Is there a crucial difference between 1 and 2? Yes, but I'd argue it's not a big deal if you can accept it. Remember these three phases:

Event handler -> render -> commit

The big difference is that option 1 does the side effect in the commit phase, and option 2 does it in the event handler phase. Why does this matter?

If, for some reason, an item called onSave multiple times before the next commit phase happened, option 1 is more robust. A reducer will "queue up" the actions and run them in order, updating state in between them, so if you did:

onSave({ id: 1, name: "Foo" })
onSave({ id: 2, name: "Bar" })

which runs the callback twice immediately, the reducer will process the first save and update the items, and process the second save passing in the already updated state.

However, with option 2, when the second save is processed the commit phase hasn't run yet, so the latestItems ref hasn't been updated. The first save will be lost.
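Here's a stripped-down illustration of that hazard, with a plain variable standing in for the ref (no React involved):

// "latestItems" stands in for the ref: it is only refreshed after a commit,
// and no commit happens between the two synchronous calls below.
let latestItems = [{ id: 1, name: "a" }, { id: 2, name: "b" }];
let committedState = latestItems; // what React would eventually commit

function onSave(item) {
  // Always reads the stale snapshot, never the pending update
  committedState = latestItems.map(it => (it.id === item.id ? item : it));
}

onSave({ id: 1, name: "Foo" });
onSave({ id: 2, name: "Bar" });

console.log(committedState);
// [{ id: 1, name: "a" }, { id: 2, name: "Bar" }] - the first save was overwritten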

However, the ergonomics of option 2 are much better for many use cases, and I think it's fine to weigh those benefits if you'll never need the ability to handle such quick successive updates. Although concurrent mode might introduce some interesting arguments against that.

Another small use case for triggering effects

In case this wasn't already long enough, there's a similar use case I'll describe quickly. You can also add new items to the list by editing data in an empty row, and the state of this "new item" is tracked separately. "Saving" this item doesn't touch the backend; it simply updates the local state, and a separate, explicit "add" action is needed to add it to the list.

The hard part is that there is a keybinding for adding the item to the list while editing, something like alt+enter. The problem is that I want to coordinate with the state change: first save the existing field, and then add to the list. The saving process is complicated, so it needs to run through that first (I can't just duplicate it all in onAdd).

This isn't a problem specific to hooks; it's just about coordinating with state changes. When I was working with reducers, I had thought that something like this would be neat: when the new item detects that you want to save + add, it first fires an action like { type: 'save-item', fields: { name: 'Foo' }, shouldAdd: true }

function reducer(state, action) {
  switch (action.type) {
    case "save-item": {
      let { fields } = action;
      let newItem = { ...state.newItem, ...fields };

      if(action.shouldAdd) {
        shouldAdd.current = true
      }

      return { ...state, newItem };
    }
    // ...
  }
}

where shouldAdd is a ref that is checked in the commit phase and triggers saving the item off to the server. This isn't possible, though, because setting a ref from inside the reducer would make it impure.

Another option would be for the item to call onAdd instead of onSave when saving + adding, and you could manually call the reducer yourself to process the changes:

async function onAdd(fields) {
  let action = { type: 'save-item', fields }
  dispatch(action)

  let { newItem } = reducer(state, action)
  post('/add-item', { newItem })
}

This is kind of a neat trick: you are manually running the reducer to get the updated state, and React will run the reducer again whenever it wants.

Since I ended up liking callbacks for my original problem, I went with a similar approach where I have a ref flag that I just set in onSave:

let [newItem, setNewItem] = useState({})
let latestNewItem = useRef(newItem);
let shouldAdd = useRef(false);

useEffect(() => {
  latestNewItem.current = newItem;
})

useEffect(() => {
  if(shouldAdd.current) {
    setNewItem({})
    post('/add-item', { newItem })
    shouldAdd.current = false;
  }
})

let onSave = useCallback((fields, { add }) => {
  // In my real app, applying the changes to the current item is a bit more complicated than this,
  // so it's not an option to separate on an `onAdd` function that duplicates this logic
  setNewItem({ ...latestNewItem.current, ...fields });

  // This action also should add, mark the effect to be run
  if(add) {
    shouldAdd.current = true;
  }
}, [])

Conclusions

Sorry for the length of this. I figured I'd rather be over-detailed than under-detailed, and I've been brewing these thoughts since hooks came out. I'll try to conclude my thoughts here:

  • Effects are very nice. It feels like we have easy access to the "commit" phase of React, whereas previously it was all in componentDidUpdate and not composable at all. Now it's super easy to add code to the commit phase, which makes coordinating things with state easier.

  • Reducers have interesting properties, and I can see how they are fully robust in a concurrent world, but for many cases they are too limited. The ergonomics of implementing many effect-ful workflows with them require an awkward dance, kind of like when you try to track effect state in local state and split up workflows. Keeping a linear workflow in a callback is not only nice but necessary in many cases for maintainability.

  • Callbacks can be made memoizable without much work. In many cases I think it's easier to use the ref trick than reducers, but the question is: just how dangerous is it? Right now it's not that dangerous, but maybe concurrent mode really is going to break it.

  • If that's the case, we should figure out a better way to weave together effects and state changes.

I hope all of this made sense. Let me know if something is unclear and I'll try to fix it.

@Nathan-Fenner

What do you think of this approach (codesandbox), based on your "Option 2" implementation? It has the following characteristics:

  • unnecessary rerenders don't occur
  • there is no useRef
  • there is no useEffect

The main reason that this is possible is that the onSave function knows which item is currently being edited (editingId can be closed over). Therefore, we can write:

let onSave = useCallback(
  item => {
    setItems(items.map(it => (it.id === editingId ? item : it)));

    // start any async stuff here
  },
  [items, editingId]
);

The main problem with this (that you probably noticed) is that onSave changes identity whenever editingId changes, which means that if you pass it to every <Row/>, then every <Row/> re-renders whenever you start editing a different one (which is bad!).

There are two ways to fix this:

  • make sure the identity of onSave stays the same across all renders. This is what your solution does (using refs to pass state to onSave)
  • don't pass it to every Row

The second approach is what I took. The idea is straightforward: the only <Row/> that could possibly call onSave() is the one that's currently being edited. Therefore, we only need to pass onSave to that row (you could also get rid of editing and simply check whether onSave is truthy, but I thought that was a bit much).

Any async logic can go after the setItems call. If you need to access the "current" items you're unfortunately out of luck, but it doesn't seem immediately obvious to me that that would be necessary. (For example, since you don't want to send the whole array to the server, your server probably supports updating individual elements without needing to worry about concurrent/older updates to the others).

It's possible that this example is missing some of the complexity that prevents you from using this approach (or a similar approach). In that case, I think it would be helpful to figure out what those details are. They should give a better idea of the "right" path forward for supporting the cases that useState / useCallback / memo are insufficient for.

@jlongster
Contributor Author

(This will be a quick reply, I gotta run soon.) I had thought of that but forgot to mention it in my post. It is an option if all you need is onSave, but in my app I have several other callbacks, one of them being onDelete. You can click an X beside any item to delete it, so every row always needs onDelete since any of them can be deleted at any time (also via special keyboard shortcuts, so checking hover or something wouldn't work). It doesn't really scale to guess when rows "need" the callbacks.

@behzad888
Contributor

@Nathan-Fenner
Here is the running sandbox for more complexity https://codesandbox.io/s/2pqwl14ywy

@JakeCoxon

JakeCoxon commented Mar 29, 2019

With option two you can actually keep a ref to the callback itself instead of refs to the individual pieces of state. The callback then always sees all of the latest state, and the ergonomics are slightly nicer. Here I introduced useStableCallback - https://codesandbox.io/s/k0zqrkrojo

It still suffers from the problem of losing the first save if you call it twice, but I reckon you could combine this with useReducer.
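The general shape of such a hook is roughly this (a sketch; the exact implementation is in the sandbox):

function useStableCallback(callback) {
  const callbackRef = useRef(callback);
  useEffect(() => {
    // After every commit, point the ref at the latest render's closure
    callbackRef.current = callback;
  });
  // The returned function never changes identity, but always calls
  // through to the most recently committed callback
  return useCallback((...args) => callbackRef.current(...args), []);
}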

@KaseyPowers

Are there any drawbacks to functional updates for state?
I tried this change to the early Option 2 example and it seems to work as expected.
https://codesandbox.io/embed/yqoj8zyrnz

let onSave = useCallback(
  item => {
    setItems(currentItems => currentItems.map(it => (it.id === item.id ? item : it)));
  },
  []
);

@jlongster
Contributor Author

With option two you can actually save the ref of the callback instead of the individual states.

That's true, I think I remember seeing this now. It's just as valid a solution as option 2. The only difference I can think of is the behavior when you are doing async work in the callback.

Later in my post I mentioned that my real callback looks something like this:

let onSave = useCallback(async item => {
  setItems(latestItems.current.map(it => (it.id === item.id ? item : it)));
  // save to server
  let changes = await post('/save-item', { item })
  setItems(applyChanges(latestItems.current, changes))
}, []);

I need to act on the result from the server and apply changes to local state from it. Think of adding a new item - a common pattern is to assign it a temporary id, and the server rewrites the id to a real one and returns the real id to you, which you need to update locally.

My code will reference the latest items, whereas the callback from useStableCallback will keep referencing the items it closed over when it was called. You might overwrite updates that happened during the async work.

However, you can solve this by using functional updates:

  let onSave = useStableCallback(async item => {
    setItems(items => items.map(it => (it.id === item.id ? item : it)));
    let changes = await post(...)
    setItems(items => applyChanges(changes, items))
  });

This also has the benefit of never overwriting updates, even if they fired off sequentially. You get the same behavior as dispatching multiple actions at once with a reducer.

I'm curious whether the React team (or anybody) sees anything wrong with the above solution. If not, it seems like something like useStableCallback could be a built-in hook, or useCallback should behave more like it.

@jlongster
Contributor Author

Are there any drawbacks to functional updates for state?

The problem isn't updating state, the options are trying to solve this problem: "We need our side effect to run in the commit phase." We need to run side effects in addition to updating the state and make sure they stay in sync.

As mentioned above, functional updates do help with not losing updates, but they don't help with effects.

@jlongster
Contributor Author

If you're already using reducers, the answer is relatively straightforward with the same technique, and you don't need useStableCallback:

  let onSave = useCallback(async item => {
    dispatch({ type: 'save-item', item })
    let changes = await post('/save-item', { item })
    dispatch({ type: 'apply-updates', changes })
  }, []);

For my use case I think driving side effects via callbacks works well. Not sure if there is another strong use case for driving them via state changes themselves (meaning the ability to emit effects from reducers).

@jlongster
Contributor Author

jlongster commented Mar 29, 2019

Actually, there is a problem (thinking out loud here, sorry for the multiple posts). Just remembered that in my real app, saving is actually like this:

async function onSave(item) {
  setItems(items.map(it => (it.id === item.id ? item : it)));

  let diff = getDiff(items.find(it => it.id === item.id), item)
  let changes = await post('/save-item', { diff })
  setItems(applyChanges(changes, items))
}

It diffs the new item against the existing one and only sends the changed fields to the server. So it needs access to the current items, and functional updates alone aren't enough. Option 2 or useStableCallback still work if you need this kind of access, but it's not 100% async-safe. I'll think more on it.

What could happen is that onSave fires multiple times before items is updated, so the second time it runs it generates a diff that could revert the first change (because item is updated but the corresponding item in items is not updated yet).

If reducers had a way to emit events, the advantage would be that you always stay in sync with the existing state and have full access to it, knowing that it's always up-to-date. Not sure if that's a good route to go, though.

@jlongster
Contributor Author

jlongster commented Mar 29, 2019

Also see the section "Another small use case for triggering effects". I need access to the current state to derive a new value and then fire an effect. It's a clearer example of needing to wait for a state change to happen before then doing something.

@pelotom

pelotom commented Mar 29, 2019

Just on the point of maintaining callback identity for synchronous state modifiers, I think my library use-methods solves this very nicely. Here's what your useReducer version looks like after conversion to useMethods:

const methods = state => ({
  saveItem(item) {
    Object.assign(state.items.find(it => it.id === item.id), item);
  },
  editItem(id) {
    state.editingId = id;
  }
});

const initialState = {
  items: itemData,
  editingId: null
};

let [state, { saveItem, editItem }] = useMethods(methods, initialState);

(full working version)

Note how saveItem and editItem just fall out as stable callbacks, no need for useCallback.

As for the best way to handle the async/sync interplay aspect, it's a great question and I'm glad you're asking it!

@KidkArolis

KidkArolis commented Mar 29, 2019

I've been working on a library called tiny-atom for managing state/effects in React. Its implementation and usage became nicer with hooks. I wanted to share it here since I think it does solve your specific problem, and maybe it can be useful when thinking about other, possibly "closer to the metal" solutions. Also, I'm just looking for feedback and for people to poke holes in it.

The goal with tiny-atom was always collocating state/effect logic, the way you're describing it in Option 2. Normally, tiny-atom is all about global state management (à la redux), but I've recently been thinking about adding a useLocalAtom/useInlineAtom hook, and your use case shows it could really be useful.

Example: https://codesandbox.io/s/71lrr5rwq
Basically, you'd just need this code to achieve your scenario:

  const [state, { onEdit, onSave }] = useLocalAtom(() => ({
    state: {
      items: itemData,
      editingId: null
    },
    onEdit({ get, set }, editingId) {
      set({ editingId });
    },
    async onSave({ get, set }, item) {
      set({
        items: get().items.map(it => (it.id === item.id ? item : it)),
        editingId: null
      });
      await new Promise(resolve => setTimeout(resolve, 1000));
      item = { ...item, name: item.name + " simulating server update" };
      set({ items: get().items.map(it => (it.id === item.id ? item : it)) });
    }
  }));

It's a bit like using useReducer, but with async actions. Instead of synchronously returning updated state, you use synchronous but dynamic get()/set() calls that ensure you're always manipulating up-to-date state in between your async effects. There's some parallel between tiny-atom's actions and this dance with React's state/reducers/effects/refs.

The implementation of the library itself is fairly small:
https://github.com/KidkArolis/tiny-atom/blob/master/src/index.js
https://github.com/KidkArolis/tiny-atom/blob/master/src/react/hooks.js

I do think there are potentially some issues with concurrent mode in the way my hooks are implemented, but there isn't much information about concurrent mode, so I haven't been worrying too much about it.

@ntkoopman

But we can't fix it with the same technique as onEdit because it depends on items - so we have to create a new callback which closes over items otherwise you'd lose previous edits.

This isn't really true, as you can get the items by passing a function to setItems. You could trigger side effects from there as well and it has the benefit of not relying on mutation.

let onSave = useCallback(
  item => {
    setItems(items => items.map(it => (it.id === item.id ? item : it)));
  },
  []
);

@jlongster
Contributor Author

@ntkoopman You cannot perform side effects in that callback; it must be a pure function that returns new state. The thing we are missing is the second callback that setState has, which lets you do this.setState(state => newState, () => triggerSideEffect()). Hooks don't provide the second callback because it would be closed over old state/props by the time it runs in the commit phase, so the goal of this issue is to explore alternative solutions.
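For reference, the class pattern being referred to looks roughly like this (a minimal sketch; post is the same hypothetical helper used throughout this thread):

class App extends React.Component {
  state = { items: [] };

  onSave = item => {
    this.setState(
      state => ({
        items: state.items.map(it => (it.id === item.id ? item : it))
      }),
      // The second argument runs after the update is committed, so the side
      // effect is guaranteed to observe the new state
      () => post('/save-item', { item })
    );
  };

  render() {
    return null; // rendering elided
  }
}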

@urugator

urugator commented Mar 30, 2019

I think this is actually quite an old (and still not fully solved) problem.

The gist is:

  • the side effect needs access to the current state (e.g. to trigger conditionally or to use a value)
  • the current state is only available inside setState/reducer, which can't contain side effects
  • you can't safely access current state anywhere else, due to its async nature

Now, before continuing, here is one important thing to realize:
You can't safely access the state of two components at the same time.
The only way to safely pass state from one component to another is via props, and to do that a component update is required.
As a consequence, you either have to push the state up, or you end up with this weird synchronization dance, causing multiple updates.
Afaik there is no way around that; it's a direct consequence of one-way data flow.
Basically you have to decide between centralized state, or synchronizing multiple local isolated states with more updates.

Ok, let's say we are willing to push the state up.
Now you can pass setState or dispatch down. These are safe to use, but as mentioned, how do you run side effects?
In an action creator?
Ok, but how do you pass the current state to the action creator?
Via props?
Then you cause a render anyway and there is no point in passing a static setState/dispatch.
And how do you chain another effect, which again requires fresh state?
By moving it to an effect/componentDidUpdate?
Good luck pairing those effects with their original invocation contexts; I really tried this "state machine" approach and it's hard to impossible to manage.

Therefore we need something more powerful with access to state:
redux-thunk has getState().
In classes we can access this.state.
In hooks it's possible to mimic this with a useRef "hack":

const [state, setState] = React.useState(initialState);
const stateRef = React.useRef(state);
stateRef.current = state;

const dispatch = React.useCallback(fn => {
  ReactDom.unstable_batchedUpdates(() => {
    fn({ state: stateRef.current, setState, xhr, dispatch });
  });
}, [])

// Usage
dispatch(({ state, setState, dispatch }) => { /* shoot yourself in the foot */ });

But in all these cases the usage is unidiomatic, tricky and often dangerous (less dangerous in redux, because the state is synchronous and isolated).
Basically you have to follow these rules:

  • you must NOT call multiple dispatch() synchronously (in sequence or one from another)
  • the only place where you can safely access current state is at the beginning of dispatch, before any state mutation is performed
  • you can chain a dispatch at the end of an effect, but you must be sure it's done after the state is committed (there isn't anything like waitForUpdate, so we assume that dispatch is safe if it's deferred)
  • you can't use async/await

dispatch(({ state, setState, dispatch }) => {    
  // Extract required state here
  const {    
    saving,
    type,
  } = state;
  
  // Bail out based on current state 
  if (saving) return; 
  
  // Update state
  setState(state => ({
    ...state,
    saving: true,
  }));    
  
  // "state" is obsolete from this point
  
  // Trigger effect
  xhr.post('/api')
    .send({      
      type, // use current state
    })
    .then(() => {
       // Chain another dispatch (it has an access to fresh state)       
       dispatch(action)        
    })
})

It's quite hard to come up with an API that has the same power, is safe, and is easy to use at the same time.

I recommend this blogpost; there are a ton of interesting references in it.

@KidkArolis

@urugator why not dispatch({ getState, setState, dispatch }) instead of dispatch({ state, setState, dispatch }), as described in the comment above?

@urugator

urugator commented Mar 30, 2019

@KidkArolis Well yes, it allows you to obtain fresh state in between the async calls without the need to dispatch, but it doesn't solve the safety problem - you can still access outdated state.
I intentionally don't use the getter because:

  • it may give a false impression that it can be safely called to retrieve the most current state at any time, which isn't true (while the state itself is known to be immutable)
  • it's similar to getState in redux-thunk, where it actually returns fresh state every time, because state updates are synchronous (afaik that's not guaranteed by the API though)
  • it forces you to call dispatch - dispatch is sort of an entry point/gate to your application - when the server response arrives or an event occurs you want it to go through this gate, so you can introduce batching/logging etc (it delimits a transactional context in a sense) EDIT: or you may want to use reducers instead of an imperative approach
  • basically the idea is to enforce a flow like: external event (network/dom event) -> dispatch -> collect current state -> imperative logic without setState (like bailout) -> update state -> effect -> repeat

Just to clarify: I don't suggest this as a solution. I am just confirming that the problem exists, the use cases are real (and actually quite common), that it's not a hook-specific problem, and that there isn't a simple bulletproof solution (yet).

@ntkoopman

@ntkoopman You cannot perform side effects in that callback, it must be a pure function that returns new state.

Why? The documentation does not mention this.

@urugator

urugator commented Mar 31, 2019

@ntkoopman

You could trigger side effects from there

  1. It's non-idiomatic and confusing to read. Things don't occur in the order in which they appear in the function.
setState(state => {  
  if (state.something) { 
    xhr.post()    
    .then(() => {       
      // we are in future
      setState(evenNewerState => {})         
    })
  }      
    
  // back to past  
  return newState;
})
  2. You have to make sure that the side effect is deferred, otherwise if it calls setState (synchronously) React will throw because you're calling setState from within setState.
    This can't happen with setState(null, () => setState()), because the callback waits for the update.
    EDIT: Also (if I am not mistaken), you can be sure that the DOM is already updated inside the callback.
    However, this.setState(updater, onUpdate) is also far from ideal, because it still depends on this.state, which must be accessed cautiously, and it doesn't compose very well.

@ntkoopman

It becomes idiomatic if you use it enough. I'm not particularly defending this, but it doesn't seem very difficult to understand to me.

async function commitChanges(previous, item) {
  let diff = getDiff(previous, item);
  let changes = await post('/save-item', diff);
  setItems(items => applyChanges(changes, items));
}

function onSave(item) {
  setItems(items => {
    let previous = items.find(it => it.id === item.id);
    commitChanges(previous, item);
    return items.map(it => (it == previous ? item : it));
  })
}

The second one is not really a problem. If you are calling setState synchronously from within setState, you can refactor the code. Since setState is local to your component, this should not be too hard.

@urugator

urugator commented Apr 1, 2019

It becomes idiomatic

I don't think it will. The updater function is defined as a synchronous function which takes state and returns a new one, which makes it easy to reason about, compose, test, etc.
Most of us are used to reading code from top to bottom, expecting causality in that direction.
Error handling is problematic, because the async context is detached from the original function.
I think this is known as "releasing Zalgo" in async programming.

Since setState is local to your component this should not be too hard.

If you move state up, you also have to move all these functions up, so that they have access to state/setState.
For app state located in a root component, this means having hundreds of methods in a single component.
So you come up with the dispatch approach, which allows you to decouple these methods from the component.
But then you need some guarantees about this dispatch function, like that it can safely access state and mutate it (the way you have these guarantees in setState(null, () => { /*here*/ }) and componentDidUpdate).
You shouldn't need to worry about whether the dispatched function runs deferred or not, no matter what it does.
Secondly, deferring the call works, but it's a kind of naive approach, as it's unaware of React's lifecycle.
There is no semantic guarantee that the state (or DOM) accessed in a deferred call is safe to use, because in theory it can be in any state (not committed yet).

@wmaurer

wmaurer commented Apr 2, 2019

@jlongster, you wrote:

What could happen is onSave is fired multiple times before items is updated, so the second time it diffs it generates a diff that could revert the first change (because item is updated but the item in items in not updated yet).

You also mentioned that the changes to the saved item could come from the server.

Here it sounds like the posts to the API must be done sequentially, i.e. a second post should not be allowed until after the first post has completed. If this is the case, then maybe some other method of effect coordination is required instead of the component lifecycle?

@jlongster
Contributor Author

@wmaurer In my scenario, the server is guaranteed to reply in the order that requests come in, so it's totally fine for a second request to fire off before the first one completes. It's safe to always fire a save whenever the state of a field changes. (And even if it weren't, this is still a good use case in my opinion, since it's a simpler scenario that others will want to handle.)

Changes from the server only propagate some internal fields. This keeps the local data fully consistent with the server, which only matters if the user does some advanced stuff after editing specific fields. It's fine for these to update whenever they come back from the server, regardless of what the user is doing.

@wmaurer

wmaurer commented Apr 5, 2019

@jlongster thanks for your explanation

Also, many thanks for this issue and your detailed explanation of the challenges involved with this problem. I've been 'following' React for a while but never did anything with it until a few weeks ago, and I started with hooks straight away.

I find this problem intriguing because pretty much every web app I write has at least a few components with a similar level of complexity in the 'dancing between state and effects'.

I think I've managed an implementation that satisfies the requirements you've documented. Whether one finds it palatable depends on whether one likes RxJS or not ;-)

Version 1:
https://codesandbox.io/s/github/wmaurer/dancing-state-effects-1/tree/master/

Version 2, with a sprinkling of @pelotom's unionize and @gcanti's fp-ts, just because they're part of my every-day toolbox:
https://codesandbox.io/s/github/wmaurer/dancing-state-effects-2/tree/master/

I'm still very much interested to see an elegant implementation using 'standard' React hooks.

@pelotom

pelotom commented Apr 6, 2019

@urugator for the problem of reading the state after an action has been dispatched, you could define a useNext hook which gives you an async function for obtaining the next value:

const nextState = useNext(state);

useEffect(() => {
  fetchStuff(state);
}, []);

async function fetchStuff(state) {
  dispatch({ type: 'START_LOADING' });
  
  let data = await xhr.post('/api/data');

  dispatch({ type: 'RECEIVE_DATA', data });
  
  state = await nextState();

  if (!state.needsMoreData) return;

  data = await xhr.post('/api/more-data');

  dispatch({ type: 'RECEIVE_MORE_DATA', data });
}

// ...

function useNext(value) {
  const valueRef = useRef(value);
  const resolvesRef = useRef([]);
  useEffect(() => {
    if (valueRef.current !== value) {
      for (const resolve of resolvesRef.current) {
        resolve(value);
      }
      resolvesRef.current = [];
    }
    valueRef.current = value;
  }, [value]);
  return () =>
    new Promise(resolve => {
      resolvesRef.current.push(resolve);
    });
}

EDIT

useNext has the limitation that it requires the state to change as a result of the dispatch... ideally dispatch itself could return a promise of the subsequent state, whether or not it changes.


@urugator

urugator commented Apr 7, 2019

@pelotom

useNext has the limitation that it requires the state to change as a result of the dispatch

Actually, you always have to force a component update and await the state before reading it (even the initial state), otherwise there is no guarantee that it's actually up to date. Consider:

fetchStuff(state); // sets state.fetching = true
fetchStuff(state); // won't see updated state.fetching so fetches anyway

It's obvious here, but once you start decoupling these calls from components and from one another, it becomes hard to reason about.

I think it's basically a hook variant of:

function nextState() {
  return new Promise(resolve => setState(null, () => resolve(this.state)));
}

I tried this approach (before hooks) and to me it wasn't generally usable.
There are other problems with deferred (async) calls: it breaks batching, you can't access events in them (React will throw that events are pooled), etc.
There was a discussion about whether setState should return a Promise, and if I remember correctly, such a promise would actually have to resolve synchronously to make it safe/useful, meaning that technically it wouldn't be a promise and therefore not usable as such.
Another problem was that setState(state) is actually cheaper than setState(state, callback), so setState would become more expensive even if the returned promise isn't used (I'd expect the same problem with dispatch).

My conclusion basically is that you really want to keep everything synchronous until the last possible moment.

@bgerm

bgerm commented Apr 28, 2019

I have no idea how this fails with Suspense, but how about something like this: https://codesandbox.io/embed/ry40mp49rq, which builds off of https://twitter.com/DavidKPiano/status/1121700680291553281 to merge effects.

@Dulanjala007

Dulanjala007 commented Aug 26, 2019

let onSave = useCallback(item => {
  setEditingId(editingId => {
    setItems(items => items.map(it => (it.id === editingId ? item : it)));
    return editingId;
  });
}, []);

If you use the hook's setState functions like this to access the current state, will it be OK or problematic?
This is just about reading current state, not about side effects.

Thanks!

@yelouafi

yelouafi commented Oct 3, 2019

@jlongster a bit late but that was an interesting read! thanks for sharing!

I'd like to share some personal thoughts

add item use case

I think this code may have some issues

let onSave = useCallback((fields, { add }) => {
  // In my real app, applying the changes to the current item is a bit more complicated than this,
  // so it's not an option to separate on an `onAdd` function that duplicates this logic
  setNewItem({ ...latestNewItem.current, ...fields });

  // This action also should add, mark the effect to be run
  if(add) {
    shouldAdd.current = true;
  }
}, [])

If multiple onSave calls are 'batched', then the overall effect would be:

  • multiple setNewItem updates are coalesced
  • shouldAdd.current = true is idempotent so the effect is the same

But that means the state updates will run in a single pass, and the same happens for useEffect; the net effect is that the code will issue a post only for the latest added item.

I think this issue reflects the way batching undermines the assumption that every state update will get its corresponding useEffect run executed alongside it.

scheduling effects after useReducer

I have some concerns about scheduling effects after reducers (the sequence Event handler -> render -> commit). In Concurrent Mode, updates can be reordered by priority. What if a high-priority update preempted the above sequence before commit and caused the corresponding component tree to be discarded (e.g. a low-priority save preempted by a high-priority navigation or close)?

If I understand properly, effects aren't executed until after the commit phase, so if we discard some work-in-progress subtree then its scheduled effects are also discarded? That may be suitable for effects like data loading, but not for things like a post.

(This may look similar to the Elm architecture, but in Elm, state updates always cause scheduled effects to execute regardless of render timing; that's because the state update occurs in a separate pass prior to vdom reconciliation, whereas in React the two operations are interleaved.)

Refs

IMO, mutability tradeoffs are radically different when coupled with concurrency. Especially considering how Concurrent Mode works (preemption, batching), I think one should fully understand the consequences of using refs to solve a problem in an app (given that many try to use them to get around dependency management).

@gaearon
Collaborator

gaearon commented May 4, 2022

We've posted an RFC that is relevant to (at least some of) this discussion.

Would appreciate your input: reactjs/rfcs#220
