add future support #38

Open

andrewgazelka wants to merge 6 commits into master

Conversation

andrewgazelka commented:

No description provided.

@andrewgazelka requested a review from a team as a code owner on February 8, 2022, 02:34.
@andrewgazelka (Author) commented:

Also, can someone verify the MSRV? I believe it should be 1.36, since that is when Future was stabilized. However, I might be wrong; I can't test on my M1 Mac.

@andrewgazelka (Author) commented:

Forgot to update the GitHub workflow. Fixed.

@andrewgazelka (Author) commented:

Removed pin_project because it required a higher MSRV.

@eldruin (Member) left a comment:

This is interesting, thank you!
In the grand scheme of things, we have thought about deprecating this crate once everything that embedded-hal-async needs is stable and we can merge it into embedded-hal, featuring traits which actually return futures and so on.
However, I would be open to adding Future support to nb if it provides some value, at least in the meantime (which could be long), or if it eases the migration.
Could you add documentation detailing how this PR would work for a driver and an MCU HAL implementation, similar to what is already in place?

README.md: outdated review thread (resolved)
impl<Ok, Err, Gen: FnMut() -> Result<Ok, Err>> Future for NbFuture<Ok, Err, Gen> {
    type Output = core::result::Result<Ok, Err>;

    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Self::Output> {
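        // (Plausible body; the quoted hunk cuts off at the signature, and the
        // closure field name `gen` is assumed.) Note that `_cx` is never
        // touched, so nothing registers the waker before Poll::Pending is
        // returned; that is what the review below is about.
        match (self.get_mut().gen)() {
            Ok(val) => Poll::Ready(Ok(val)),
            Err(Error::Other(err)) => Poll::Ready(Err(err)),
            Err(Error::WouldBlock) => Poll::Pending,
        }
    }
}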
@ryankurte commented on Feb 25, 2022:

Hey, thanks for the PR! Given that the waker isn't getting propagated here, I wouldn't expect the scheduler to poll this again, and always hitting wake runs into performance problems on non-embedded platforms... does this behave as expected? How are you using this at the moment?

(cc @Dirbaio, the futures-whisperer)

@Dirbaio (Member) commented on Feb 25, 2022:

Yes, it'll hang with most executors out there due to not waking the waker. The docs on Future::poll() say supporting wakers is mandatory.

It'll only work with very dumb executors that do loop { fut.poll(cx) }, ignoring the waker and spinning the CPU at 100%.
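For reference, a minimal sketch of such a busy-polling executor (the no-op waker and the block_on name here are illustrative, not from any particular crate):

use core::future::Future;
use core::pin::Pin;
use core::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A waker whose wake() does nothing; harmless here only because this
// executor never sleeps between polls.
fn noop_raw_waker() -> RawWaker {
    fn clone(_: *const ()) -> RawWaker {
        noop_raw_waker()
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    RawWaker::new(core::ptr::null(), &VTABLE)
}

fn block_on<F: Future>(mut fut: F) -> F::Output {
    // Safety: `fut` lives on this stack frame and is never moved after pinning.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    let waker = unsafe { Waker::from_raw(noop_raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    loop {
        // Busy-poll at 100% CPU; a well-behaved executor would park here
        // until the waker is woken.
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}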

@Dirbaio (Member) commented:

There's one way around it, which is immediately waking ourselves on poll: cx.waker().wake_by_ref(). It'll still spin the CPU at 100%, but at least it won't hang. In theory this might cause other tasks to get starved of CPU, but in practice executors usually handle a "self-wake" like this by moving the task to the back of the queue, so other tasks still get to run. I don't think that's guaranteed, though.
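To make the workaround concrete, here is a minimal sketch of a future that self-wakes on WouldBlock; the SelfWaking type and its closure field are hypothetical, not the PR's NbFuture:

use core::future::Future;
use core::pin::Pin;
use core::task::{Context, Poll};

// Hypothetical wrapper around an nb-style closure.
struct SelfWaking<F>(F);

impl<T, E, F: FnMut() -> nb::Result<T, E> + Unpin> Future for SelfWaking<F> {
    type Output = Result<T, E>;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        match (self.get_mut().0)() {
            Ok(v) => Poll::Ready(Ok(v)),
            Err(nb::Error::Other(e)) => Poll::Ready(Err(e)),
            Err(nb::Error::WouldBlock) => {
                // Self-wake: ask the executor to poll us again immediately.
                // We never hang, at the cost of spinning the CPU.
                cx.waker().wake_by_ref();
                Poll::Pending
            }
        }
    }
}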

IMHO the way forward is HALs implementing the async traits, and deprecating nb. This allows HALs to wake the wakers from IRQs, avoiding spinning the CPU at 100%. It also allows using DMA (you can't use DMA soundly with nb). A sketch of that IRQ-driven model follows the next paragraph.

There's another issue: the way nb is currently used, the task is polled for each byte. This makes it too easy to lose data if the task isn't polled fast enough, as the hardware buffers are usually tiny. If HALs impl the futures instead, they have powerful tools to avoid that (irqs, dma).
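As a sketch of that IRQ-driven model (the UART names are hypothetical; AtomicWaker is the one from the futures crate):

use core::future::Future;
use core::pin::Pin;
use core::task::{Context, Poll};
use futures::task::AtomicWaker;

static RX_WAKER: AtomicWaker = AtomicWaker::new();

// Called from the UART RX interrupt handler when data arrives.
fn on_rx_interrupt() {
    RX_WAKER.wake();
}

// Hypothetical stand-in for a non-blocking read of the UART data register.
fn uart_try_read() -> Option<u8> {
    None
}

struct ReadByte;

impl Future for ReadByte {
    type Output = u8;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<u8> {
        // Register the waker before checking, so a byte that arrives in
        // between still wakes us.
        RX_WAKER.register(cx.waker());
        match uart_try_read() {
            Some(byte) => Poll::Ready(byte),
            None => Poll::Pending, // the IRQ wakes us; no busy-polling
        }
    }
}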
