[WIP] Migrate the fuser code to be fully async #112
Conversation
examples/simple.rs (outdated)

```diff
@@ -425,7 +425,7 @@ impl SimpleFS {
 }

 impl Filesystem for SimpleFS {
-    fn init(&mut self, _req: &Request, _config: &mut KernelConfig) -> Result<(), c_int> {
+    fn init(&self, _req: &Request, _config: &mut KernelConfig) -> Result<(), c_int> {
```
For something to be async friendly, requiring interior mutability in the filesystem really does seem to be a requirement.
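To make that concrete, here is a minimal sketch (with illustrative fields, not the PR's actual `SimpleFS` state) of what dropping `&mut self` implies: mutable state moves behind a lock so handlers taking `&self` can run concurrently.

```rust
use std::collections::BTreeMap;
use std::sync::Mutex;

struct SimpleFS {
    // Previously plain fields mutated through &mut self; now wrapped so
    // handlers that only get &self can still update them.
    inodes: Mutex<BTreeMap<u64, Vec<u8>>>,
}

impl SimpleFS {
    fn touch(&self, ino: u64) {
        let mut inodes = self.inodes.lock().unwrap();
        inodes.entry(ino).or_default();
    }
}
```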
Been using https://github.com/filebench/filebench to explore the performance/behavior of this. Current config is something like:

Benchmarks compare this personality across:

- fuser with xmp migrated to fuser from zargony/fuse-rs#141
- the async code in here
- stackfs -- https://github.com/sbu-fsl/fuse-stackfs
- the raw file system

Comparison is done here via an SSD on my machine:
Wow, this looks really exciting, thanks! I've been wanting to add async for a while. I'll try to look it over this weekend and provide some feedback.
Great, looking forward to it, thanks! Playing around some more to try to make the benchmarks more reproducible has been interesting: default features in the crate mean a very small bump in perf -- due to the low write size, I think it's basically that a huge number of serial tiny 4k writes just swamps everything, so there are few gains to be had. Enabling

(This is using cgroups to assign 2 logical cores to filebench and 3 logical cores to the client process, be it Rust or C code. Hyper-threading enabled/workstation, so a little fuzziness on the exact CPU allocation/usage -- done to make the numbers more stable.)

The blocksize for the operation seemingly has a huge effect on the efficiency of the async one. I'm guessing this is just that the serial latency through the kernel is the limiting factor all told, but I wouldn't be 100% on that.
Overall this looks about like I expected! I skimmed some of it, so I may have missed something. I left a few questions and comments.
Also, there are a bunch of style check failures. Can you run the default `make` target? It should run all the checks for you.
I addressed all the comments here, did some more error handling, and cleaned up some more comments and such. The failing test now is related to OSX and mounting -- it seems setting O_NONBLOCK on the macfuse implementation of fuse fails. Something that could be queried upstream, but otherwise I'm not sure of a simple workaround. Overall this is kind of a large/breaking delta and would require tokio in the deps. An alternate thought here is that we/I could split this crate into 3 of them: if we were to add in future an async-std type implementation or other async-based ones, we could name them fuser-* and then have a fuser-async shared crate of code. (The alternative here is to have fuser-async and use feature flags on it for the impl?) Thoughts?
Cool! I'll take another look, and also try building
Ah ya, I've definitely noticed that macfuse has different behavior in some cases. I would just special-case it. Can you take a look at the GitHub Actions tests? It looks like they didn't run, and that's where most of the test coverage is. It includes things like the xfstests suite. I'd like to keep everything in this crate. I know some other projects split features into different crates, but I don't really like that approach. Feature flags sound good, and will make this easy to merge & release without immediately breaking backwards compatibility. I haven't been following async-std vs tokio. Is async-std expected to become stable at some point?
Yeah, I'm afraid I edited/added another test into the GitHub Actions, which seems to have disabled them in this repo. (I wanted to get the Mac ones running in GitHub Actions since the AppVeyor thing takes forever.) But I'll revert it here, and try to add the GitHub Actions in a separate PR so more of the coverage is done there / better latency where possible.
For the Mac one, I think since there is no O_NONBLOCK I'll have to spawn the operations into the blocking thread pool for the reads. Will see about setting the flags up to control that behavior and get it working. It'll be less performant by a little, I imagine, but it's all a bit of black magic how the Mac stuff works as far as I can work out; it's all closed source in the main impl.
Ya, it's too bad macfuse switched to closed source :( I wrote a pure Rust
Switching my other project, fleetfs, was pretty easy. It was almost entirely mechanical changes, like the SimpleFS example. However, there seems to be a memory leak. I'm not sure if it's in your async changes, or something I missed when switching fleetfs over to the async code. I'll look into it if the GitHub CI tests on this PR pass.
Ok -- so it does all pass now for OSX/Linux.

- I refactored some of the handling out into a sub-module. There should probably be a feature flag to pick either blocking or async handling there, but both work happily.
- I replaced any timeouts with oneshot channels, so teardown happens faster -- the xfstests break if the shutdown takes more than some relatively low number of ms.
- Even after I did that, some of the tests combined with other tests still seemed to flake. I changed the xfstests for now to have a 1 second sleep after issuing the umount, which has fixed them all. Let me know if this seems like something we can't have, and I can try to see if there is more time we can shave off. (Or possibly it's a case of us firing a umount in the channel drop, and it drops the new mount, if that's possible.)
Cool! I'll take a look this weekend.
Overall this is looking good! I left a few comments.
Do you know why the xfstests fail without the sleep? I do think that needs to be fixed. The xfstest suite is pretty much the gold standard for verifying that a filesystem works correctly, so I'd guess it's uncovering a real race or some other issue.
src/channel.rs (outdated)

```rust
    Ok(res)
}

fn internal_new_on_err_cleanup(
```
This function doesn't seem to do anything? It always returns Ok
```rust
        fuse_session,
    ) {
        Ok(r) => Ok(r),
        Err(err) => {
```
See above, I don't think this code path can trigger
After more iterations, I'm not sure the inner function here is adding much value at this point -- but it can fail via `Channel::create_sub_channels(&fd, worker_channel_count)?`. The design/approach of the 2 layers was to allow the inner function to sprinkle `?` around everywhere while the outer function cleaned up. Now, though, only one spot is fallible, so I'll collapse these and handle failure to create the sub-channels directly.
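A rough sketch of the collapsed version (hypothetical signatures; the real `Channel` fields are in the PR):

```rust
// Sketch only: assumes a Channel { fd, sub_channels } shape.
fn new(fd: FileDescriptorRawHandle, worker_channel_count: usize) -> std::io::Result<Channel> {
    // The single fallible step; clean up the root FD directly on error
    // instead of delegating to an outer cleanup wrapper.
    match Channel::create_sub_channels(&fd, worker_channel_count) {
        Ok(sub_channels) => Ok(Channel { fd, sub_channels }),
        Err(err) => {
            fd.close();
            Err(err)
        }
    }
}
```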
src/io_ops/blocking_io.rs (outdated)

```rust
#[derive(Debug, Clone)]
pub struct SubChannel {
    fd: FileDescriptorRawHandle,
    shared: bool, // If it's a shared file handle, when asked to close, noop.
}
```
This seems error prone. Is there a way to make `FileDescriptorRawHandle` close on drop, and then pass around a reference-counted file handle?
src/io_ops/nonblocking_io.rs (outdated)

```rust
#[derive(Debug, Clone)]
pub struct SubChannel {
    fd: Arc<AsyncFd<FileDescriptorRawHandle>>,
    shared: bool, // If it's a shared file handle, when asked to close, noop.
}
```
Same comment about relying on `Drop` instead.
Sounds good. I think `Arc` should be free to use/access outside of drop/clone, and not involve any indirection, so either using a single `Arc` wrapping each value or shared `Arc`s should achieve the same purpose and be much simpler. Will update.
Looking at this more, there is a little bit of awkwardness in that we want to be able to call close -- but we won't know when everything has dropped. (Once the FD closes, we close the others to make it all shut down.) I've refactored it now, I think, to be clearer: use `Arc` + `Drop`, but also for the inner FD we have a `close` method that synchronizes using an atomic boolean to ensure we only call it once.
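Roughly the shape being described, as a hedged sketch (illustrative details; assumes the `libc` crate):

```rust
use std::os::unix::io::RawFd;
use std::sync::atomic::{AtomicBool, Ordering};

#[derive(Debug)]
pub struct FileDescriptorRawHandle {
    fd: RawFd,
    is_closed: AtomicBool,
}

impl FileDescriptorRawHandle {
    pub fn close(&self) {
        // swap returns the previous value, so only the first caller
        // actually issues the libc::close.
        if !self.is_closed.swap(true, Ordering::SeqCst) {
            unsafe { libc::close(self.fd) };
        }
    }
}

impl Drop for FileDescriptorRawHandle {
    fn drop(&mut self) {
        // Backstop: the last owner dropping the handle also closes it.
        self.close();
    }
}
```

Clones then share this via `Arc`, so an explicit `close` can shut everything down early while `Drop` guarantees cleanup.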
FWIW, as part of my work removing as much unsafe as possible, I've been replacing the raw fd handling (and calls to `libc::write`, `libc::close`, etc.) with a normal `std::fs::File`. That way we have confidence that things are only closed once and there are no data races, because such things are impossible without unsafe.
Is this a safe/guaranteed operation for fuse? I had wondered when looking at this, but I couldn't find any documentation on either side saying that the atomic operations fuse uses in the kernel would be safe/respected going via the File API. This code relies on iovec writes, which are guaranteed atomic in the fuse kernel handling. Will the `std::fs::File` API make the same guarantee? (Wondering if stuff winds up getting chunked, or written in smaller/multiple writes, it could get interleaved, or not arrive as one fuse operation?)
Unless I'm mistaken, I think you're taking some of these guarantees more strongly than they are intended. `write_vectored` doesn't guarantee that there is one underlying write call -- indeed, it has a default fallback to call `write` on the first non-empty buffer (which would be an invalid write in our use case here). There is a nightly API to query whether the write would be vectored. This would corrupt the fuse operations AFAIK.

(One attempt doesn't necessarily mean one system call, no? It means if there's a failure or partial write at the OS level it won't attempt to call write again, and instead would return Err or Ok(num bytes processed). Since we explicitly require system-call parity, I don't think these are quite the same strength of contract.)

I guess for these internal ones, where the fuse kernel is abusing the file system API, I'm pretty slow to move to a higher-level API not designed for this. The current implementations for Unix look right to me (though we pass the FD around more and explicitly call close to terminate operations, which isn't supported in the `std::fs::File` API), but I believe they could remain within their contract and break fuse in subtle ways (the iovec write being an example).
> we pass the FD around more and explicitly call close to terminate operations which isn't supported in the std::fs::File api

This was one of my motivations for moving to the `File` API. I was working through the fuse_sys code and found a few places where it looked like we could `close()` twice or perform operations after the close. By using the `File` API and removing the `unsafe` blocks we'd be forced to get this right.

The exception is when linking against libfuse3, where the FD is by rights owned by the libfuse session, but my plan there would be to do `File::into_raw_fd` before destroying the session.
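A tiny sketch of that hand-off (hypothetical function, just illustrating the `into_raw_fd` ownership transfer):

```rust
use std::fs::File;
use std::os::unix::io::{IntoRawFd, RawFd};

fn release_fd_to_session(file: File) -> RawFd {
    // After into_raw_fd, File's Drop no longer runs, so closing the fd
    // becomes the libfuse session's responsibility.
    file.into_raw_fd()
}
```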
> where the fuse kernel is abusing the file system API

Fuse isn't that special in this regard. The concerns about atomic writes would apply to named pipes just the same (`PIPE_BUF`).
> I believe they could remain within their contract and break fuse in subtle ways (iovec write being an example).

Rust being a systems language, I think it is safe to assume that a single call to `write_vectored` results in a single call to `writev`. The Rust maintainers take backwards compatibility very seriously; changing this behaviour would break users that reasonably rely on it, and as such would be a compatibility break.
`write_vectored` calling `writev` is explicitly not in the File API, though, as an example. The current implementation on Unix does this, but the API does not require it and by default doesn't do it.

(There's also the requirement, as far as I'm aware, that you have a mutable handle on the File in order to read/write from it in the `std::fs::File` API, which wouldn't mix well with the current API where it's presumed that users can use the Reply passed in from another thread -- unless we wrap it in mutex locks?)
Anyway, I'm relatively new to this library, so I'd defer to @cberner as the maintainer. This is a low-level library, so I'd be inclined to say it should aim to use the bare-metal API if it's at all ambiguous. Limiting the access surface of the unsafe code and using `Arc`s and drop behaviors should, I believe, have the same effect as a reduced-surface File API.
Even as the maintainer, I'm only medium-familiar with this part of the code, since I took it over from the original author :)

I'd like to have as little `unsafe` code as possible. For the specific case of `write_vectored`, though, I think we should stick with `writev` until there's a way to check that `write_vectored` is atomic. Debugging problems from a non-atomic `write_vectored` implementation would be frustrating.
```rust
join_handles.push(tokio::spawn(async move {
    let r =
        Session::spawn_worker_loop(active_session, ch, receiver, filesystem, idx).await;
    // once any worker finishes/exits, then then the entire session shout be shut down.
```
There are some typos in this comment
reworded
@@ -0,0 +1,1012 @@

```rust
//! Analogue of fusexmp
//!
//! See also a more high-level example: https://github.com/wfraser/fuse-mt/tree/master/example
```
Wrong link?
I actually cribbed this right from the fuse-rs PR https://github.com/zargony/fuse-rs/pull/141/files, but it's quite possibly out of date/incorrect; I'll take a look tomorrow.
The link is intentional, I think, since this seems to be a port of the code in https://github.com/wfraser/fuse-mt/blob/master/example/src/passthrough.rs, which is aimed at the libfuse C bindings. (I think the reference here is that fusexmp is low-level/C, and the Rust one is higher-level?) Open to new wording or changing anything here.
I'm actually in the process of changing `fuse-mt` over to `fuser` now, FYI, heh.
@wfraser cool! BTW, what do you think of making the `fuser` API async?
I'm surprised that using async helped XMP performance. I'd expect async not to help much, as the underlying FS APIs are all synchronous, so typically async FS APIs are just punting operations to a threadpool. That just imposes more copying and synchronisation overhead. Is the "RUST" (i.e. sync) example you benchmarked above using multiple threads, or just a single thread?
If you mean that the old process would race and unmount the new process' mountpoint, I think that case should be handled already. libfuse has logic to check that the fd is still valid before unmounting, and I implemented the same logic in
@cberner That's pretty much exactly what I had been telling myself was causing this. Let me go take another look at it.
Cool, will be interested in what you find! It's certainly possible the logic to guard against that race is broken.
I tracked this down to a few poorly interacting things:

(3) combined with (1) or (2) means that the process exits 1 second later and unmounts whatever new fuse session is in place, which is what is breaking the tests.
A simpler example of, I believe, hitting the same race condition is:

It is pretty hard to make a repro with the master code since it has the blocking reads and it shuts down so quickly. Though if the user code does anything after replying in some actions, it can cause it. (If you block before/arbitrarily, then unmount will complain that there's open activity on the file system.)
Ah cool, nice find!
src/io_ops/nonblocking_io.rs (outdated)

```rust
use std::time::Duration;
use tokio::time::timeout;
loop {
    if let Ok(guard_result) = timeout(Duration::from_millis(1000), self.fd.readable()).await
```
Instead of a 1 sec timeout, is it possible to wait on both the read and the `terminated` future that's passed into `Channel::receive`? I think that would remove any unmount delay.
The `terminated` future is only satisfied by one of these inner reads firing and triggering the exit cascade -- we actually do wait on it outside of here; the outer `select!` should get satisfied based on `terminated` if/when it happens.
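For clarity, the rough shape of that outer loop (a sketch; `Channel` and `receive` stand in for the PR's actual types and signatures):

```rust
use tokio::sync::oneshot;

async fn worker_loop(channel: Channel, mut terminated: oneshot::Receiver<()>) {
    loop {
        tokio::select! {
            // Fires once some other worker has triggered the exit cascade.
            _ = &mut terminated => break,
            req = channel.receive() => match req {
                Ok(Some(_request)) => { /* dispatch to the filesystem */ }
                // EOF or an invalid FD: begin the shutdown cascade.
                Ok(None) | Err(_) => break,
            },
        }
    }
}
```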
I see, maybe I misunderstood your previous comment, "(3) combined with (1) or (2), means that the process exits 1 second after". Is that 1sec delay caused by this timeout, or something else?
I probably didn't explain this super well. From an IO/FD perspective, if we go through the last request done:

1. User/someone calls `umount` on the mount point
2. Poll on socket `readiness` fires
3. `recv` on the socket gets a Statfs request (for some ABIs the kernel always issues this after an unmount, it seems)
4. `readiness` will return true on the next try
5. `recv` returns `E_WOULDBLOCK`
6. `readiness` blocks
7. ...some delay in the kernel...
8. FD handles become invalid
9. Timeout fires in code
10. Call `recv` again -- returns INVALID FD, I think, is the error message

The issue is around step 8: we would expect the `readiness` call to return and wake things up, but this doesn't seem to happen with fuse, since the FD isn't closed/marked as readable by the kernel; it's just invalid somehow. The 1 second timeout (the timeout in general) is an arbitrary workaround for this issue. (Maybe there is some lower-level thing that does fire, if we used epoll directly?)
I see, and where in that sequence do we receive the `Operation::Destroy`? Since the code does `select!` on the `terminated` signal, it seems like that would happen before the timeout fires and avoid the 1 sec delay.
Might have changed in the meantime, but libfuse/libfuse@b9ea32f with the original changelog entry would explain never seeing it:

> Add a synchronous DESTROY message to kernel interface. This is invoked from umount, when the final instance of the filesystem is released. It is only sent for filesystems mounted with the 'blkdev' option for security reasons.
Oh, well that's good to know! Ok, I probably need to read through some of this again with that in mind.

So, if I understand correctly then: previously we only used `libc::read`, which returns immediately when the FD becomes invalid. However, now we call `readiness`, and that blocks for an unknown (and long) delay after the FD is invalidated, which is why we need a timeout. Is that right?
Yep, that's it exactly. It seems that the kernel component of fuse must just kill off blocked read operations. But however it works (or possibly interacts with tokio's AsyncFd stack/mio in Rust), poll calls don't get woken up the same way. If the file handle got closed, then the readiness would wake properly. But something 'special' seemingly happens with fuse where it becomes invalid, so it doesn't wake.
Ok, ya, it now makes sense why this races and causes more failures than before :/ Perhaps we should make the timeout configurable, and have a single worker loop that uses a short (~1ms) timeout while the rest use the 1 sec timeout.
Yeah, I had been avoiding too low a timeout, but 1ms for just one FD seems reasonable enough to me. I've added it so the root/original FD now has a 1ms timeout, and re-enabled the mount tests. The race is still possible in the code, but it should be much less likely?

(Also, lots of churn since I rebased on the master changes around the API separation.)
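A sketch of the configurable poll timeout being described (illustrative names; `FileDescriptorRawHandle` is the PR's type and would need to implement `AsRawFd`):

```rust
use std::io;
use std::time::Duration;
use tokio::io::unix::AsyncFd;
use tokio::time::timeout;

async fn await_readable(
    fd: &AsyncFd<FileDescriptorRawHandle>,
    poll_timeout: Duration, // ~1ms for the root FD, ~1s for the others
) -> io::Result<()> {
    match timeout(poll_timeout, fd.readable()).await {
        // Readiness arrived (or a real IO error did).
        Ok(ready) => ready.map(|_guard| ()),
        // Timed out: return anyway so the caller retries recv(), which
        // reports an unmount-invalidated FD as an error even though
        // readiness never fires for it.
        Err(_elapsed) => Ok(()),
    }
}
```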
```rust
}

impl FileDescriptorRawHandle {
    pub fn new(fd: RawFd) -> Self {
```
I think this function should be `unsafe`, and `fd` should be private too; otherwise it can be used from safe code to close or read from any arbitrary FD.
```rust
}

/// Receives data up to the capacity of the given buffer (can block).
fn blocking_receive(fd: &FileDescriptorRawHandle, buffer: &mut [u8]) -> io::Result<Option<usize>> {
```
This should also be `unsafe`; `fd` is not guaranteed to point at a valid, owned fd.
```rust
    pub fn close(&self) {
        let already_closed = self
            .is_closed
            .swap(true, std::sync::atomic::Ordering::SeqCst);
```
The atomic here guarantees that a file is only closed once, but there are no guarantees that read or write aren't called on the file after it has been closed. This is unsafe as FDs are reused.
`unsafe` in Rust's definition isn't generally taken to include this type of logical behavior anymore, AFAIK.

Nomicon: https://doc.rust-lang.org/nomicon/what-unsafe-does.html

And similar types in, say, tokio are not marked as unsafe, e.g. https://docs.rs/tokio/1.2.0/tokio/io/unix/struct.AsyncFd.html (the `new` there, by the logic above, should be unsafe, since there's no guarantee the FD exists, isn't duplicated, or anything else -- though it was discussed in their API, and they decided not to).

For this stuff we should limit how much it can go wrong (try to make it as hard/impossible as we can) and put the right privacy controls in place, but from what I can work out, in Rust these days the unsafe keyword is kept more for memory safety.
(If you have access to a more concrete guide, I'd love to see it; there does seem to be lots of confusion in this space in general that I've seen across different crates.)
> similar types in say tokio are not marked as unsafe, e.g. https://docs.rs/tokio/1.2.0/tokio/io/unix/struct.AsyncFd.html

Note: `AsyncFd<T>` is not the same. In the case of `AsyncFd<File>`, the `AsyncFd` owns the `File`, and the file owns the underlying FD and is responsible for closing it. In the case of `AsyncFd<RawFd>`, the `AsyncFd` owns the `RawFd`, but the `RawFd` is really only a (non-owning) reference to the underlying fd, and is not responsible for closing it. Indeed, `RawFd` does nothing on `drop`; it's just a typedef for `c_int`.
This is the reason that FromRawFd::from_raw_fd is unsafe:

> This function is also unsafe as the primitives currently returned have the contract that they are the sole owner of the file descriptor they are wrapping. Usage of this function could accidentally allow violating this contract which can cause memory unsafety in code that relies on it being true.

`AsyncFd<RawFd>` will call `poll` on the underlying FD, but this should be safe, as poll shouldn't have side-effects. Calling `read`, `write`, or `close` does have side-effects, so those are unsafe.
> unsafe in rust's definition isn't generally taken to include this type of logic behavior anymore/generally afaik. ...these days the unsafe keyword is more kept for memory safety.

To be clear: this isn't unsafe because it itself violates memory safety, but because it could cause memory unsafety in other code that does expect to be the sole owner of a file.
Does reading different data than expected, due to an interleaving seek, count as memory unsafety? That's the main sort of memory issue I could think of. But do you have something else in mind?
Memory mapping a file via the fd? Just curious here as to how these interact.
https://users.rust-lang.org/t/why-is-fromrawfd-unsafe/39670/5 seems to suggest this is a historical artifact from an older definition of unsafe? Or I'm still not really following.

Specifically, when it comes to this code, I was planning to look at more code-level things once the overall API/behaviors seemed sane. I think we probably want to migrate some of the sub-channel setup code from channel into the per-type files, and make it all private, with the goal that the exposed surface is a safer API that doesn't expose the raw fd in construction or anywhere else.

In the vein of how we might want to merge this if we didn't want to break everyone, or to allow other async APIs via multi-crate: https://github.com/ianoc/fuser/tree/ianoc/multi-crate attempts better encapsulation of the different bits of the stack for the async bits, so we can add more/flag more in, with a much more reduced set of deps to be 'always on'. Thoughts @cberner?
The way I'd imagine this working in the future is separate crates. The low-level ones would be:

Then on top of that there would be higher-level crates implemented that tie the two together. This would be the equivalent of the current

FWIW, I think the enum-based API could work really well for async. Rust is limited by the lack of async in traits, but an enum-based API wouldn't be bound in the same way.
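To illustrate why an enum-based API dodges the async-trait limitation, a sketch (all names hypothetical, not an existing fuser API): the session yields request values that plain async code can match on, so no trait method ever needs to be async.

```rust
use std::ffi::OsString;

// Hypothetical request enum; requests are values, not trait callbacks.
enum Operation {
    Lookup { parent: u64, name: OsString },
    Read { ino: u64, offset: i64, size: u32 },
    // ...
}

async fn run(mut session: Session) {
    // An ordinary async fn can drive the loop; no async trait required.
    while let Some(op) = session.next_operation().await {
        match op {
            Operation::Lookup { .. } => { /* look up and reply */ }
            Operation::Read { .. } => { /* read and reply */ }
        }
    }
}
```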
I'd like to go with feature flags, and not multi-crate. I'm open to hearing arguments for why multi-crate would be better, though. My reasoning is that publishing separate crates is only useful if they have separate public APIs which users will build on top of. And I don't really want to support a

Another reason is trust, which I realize is subjective, but I dislike the approach of having many crates named "project-*" because it makes it difficult to know which are official crates of the project. For example, I know "tokio" is an official crate, and I know "tokio-util" is too (but only because I've looked it up). However, there are a bunch of other crates, like "tokio-file", and I don't know if they're official or not. By having a single crate with feature flags, it's clear to users that everything is from the official project.
You've convinced me. @ianoc: I like how your async_api is a module preserving backwards compatibility. I'd quite like to put the existing sync API in a module too, but that's for another PR.
Superseded by doing separate smaller PRs and flagging.
Exploring migrating the API/code to be async-friendly.

Due to the nature of it, without some heavy feature usage I'm not sure one can avoid the tokio dependency.

This is really an exploration/thing for thoughts and feedback. Making it all async here is a pretty large change, alas, so it might not make sense to merge.

Needs more cleanup regardless.