This repository has been archived by the owner on Nov 6, 2020. It is now read-only.

Initial ethcore-io refactor #8185

Closed
debris opened this issue Mar 22, 2018 · 15 comments
Labels: F6-refactor 📚 Code needs refactoring. M4-core ⛓ Core client code / Rust. P5-sometimesoon 🌲 Issue is worth doing soon.

debris (Collaborator) commented Mar 22, 2018

Currently the ethcore-io event loop is based on a deprecated part of mio. Let's replace it with tokio.

@debris debris added F6-refactor 📚 Code needs refactoring. M4-core ⛓ Core client code / Rust. P5-sometimesoon 🌲 Issue is worth doing soon. labels Mar 22, 2018
tomaka (Contributor) commented Mar 22, 2018

Can you actually use tokio here (an actual question — I don't know ethcore-io well enough to say)?
I think the Core and the Handle are not Send or Sync, so you cannot send sockets (or futures that contain a socket) across threads.

@5chdn 5chdn modified the milestones: 1.12, 1.11 Mar 23, 2018
0x7CFE (Contributor) commented Mar 26, 2018

@tomaka, please correct me if I'm wrong, but I do not see a problem here.

It looks like Parity internals use the util::io facade to get event loop functionality, and if we look at the API we can see that most of the Message generics are bounded by Send + Sync.

I even tried forcing a + Sync bound on IoChannel, IoMessage, the handlers and other types, and it all compiled, which makes me think that, at the moment, we do not have messages that require Send or Sync.

Could you please give me some more details on the part of the system where you expect problems? Which particular Core and Handle did you mention before?

@0x7CFE 0x7CFE self-assigned this Mar 26, 2018
tomaka (Contributor) commented Mar 26, 2018

Right now the sockets are identified by a StreamToken, which is just a usize; it is essentially the mio event loop that owns the sockets.

If you switch to tokio, however, you would need to manipulate TcpStream objects instead of usizes, and these are not Send or Sync if I'm not mistaken (I could be wrong here).
Since TcpStream implements traits from the futures crate, code that uses ethcore_io is supposed to manipulate the streams directly. What I mean by that is that the role they play is more than that of simple identifiers.

It should be possible to wrap tokio's API so that each TcpStream stays in a dedicated I/O processing thread, and use channels from std::sync::mpsc to communicate data, but I think that would add a large overhead.

> Could you please give me some more details on a part of the system where you expect problems? What particular Core and Handle you've mentioned before?

https://docs.rs/tokio-core/0.1.16/tokio_core/reactor/struct.Core.html
https://docs.rs/tokio-core/0.1.16/tokio_core/reactor/struct.Handle.html

tomaka (Contributor) commented Mar 26, 2018

Also, once you build a future, you need to send it back to ethcore_io on the correct thread so that it can be passed to the Core::run function. This further complicates the problem.

tomaka (Contributor) commented Mar 26, 2018

Another problem is that even if you track which thread each TcpStream belongs to, a particular user of ethcore_io may want to merge multiple TcpStream objects together, and these streams may belong to different threads.

I think tokio in general is not meant to be used from multiple threads.
It could make sense to use only a single worker and make Parity as a whole entirely "single-threaded", except for futures-cpupool, which would manage the background CPU processing threads. But that's a big decision to make.

0x7CFE (Contributor) commented Mar 26, 2018

@tomaka, thank you very much for the detailed explanation! It is much clearer now. I will try to find a solution.

debris (Collaborator, author) commented Mar 26, 2018

cc @tomaka

> If you switch to tokio, however, you would need to manipulate TcpStream objects instead of usizes, and these are not Send or Sync if I'm not mistaken (could be wrong here).

You could also split a &TcpStream into a ReadHalf and a WriteHalf. There would still be a single place owning the streams, but the halves could be safely passed between threads.

debris (Collaborator, author) commented Mar 26, 2018

Wouldn't that be enough?

debris (Collaborator, author) commented Mar 26, 2018

Ah, actually that would not work... That's how I 'solved' this problem in parity-bitcoin.

tomaka (Contributor) commented Mar 26, 2018

I don't think Arc<TcpStream> would solve the problem, because it wouldn't be Send either.

I don't know how parity-bitcoin is organized, but from a quick look you don't seem to use a dedicated thread for I/O (and are therefore not facing the same issues).

0x7CFE (Contributor) commented Mar 26, 2018

Maybe we could solve the issue by not trying to send stuff that isn't Send in the first place? I can imagine an architecture where requests are delegated to the TcpStream (which resides in the thread where it was created) using messages dispatched by the event loop.

Edit: this was already suggested above.

0x7CFE (Contributor) commented Mar 26, 2018

> If you switch to tokio, however, you would need to manipulate TcpStream objects instead of usizes, and these are not Send or Sync if I'm not mistaken (could be wrong here).

@tomaka, as of versions 0.1.x I believe Send is implemented for TcpStream in both tokio and tokio_core. I even tried moving a TcpStream between threads on the playground, and rustc didn't complain.

tomaka (Contributor) commented Mar 26, 2018

Oh, I guess I was mistaken then.
But I suppose you still have to run them on the correct thread afterwards, right?

twittner (Contributor) commented
I think with more recent mio versions and tokio >= 0.1, everything is Send + Sync from tip to toe (except for explicit current-thread execution), so moving a TcpStream across thread boundaries or using it concurrently should not be an issue, AFAICT.

@5chdn 5chdn modified the milestones: 1.11, 1.12 Apr 24, 2018
@5chdn 5chdn modified the milestones: 2.0, 2.1 Jul 17, 2018
@5chdn 5chdn modified the milestones: 2.1, 2.2 Sep 11, 2018
@5chdn 5chdn removed this from the 2.2 milestone Oct 29, 2018
@5chdn 5chdn added this to the 2.3 milestone Oct 29, 2018
@5chdn 5chdn modified the milestones: 2.3, 2.4 Jan 10, 2019
@5chdn 5chdn modified the milestones: 2.4, 2.5 Feb 21, 2019
@soc1c soc1c modified the milestones: 2.5, 2.6 Apr 2, 2019
@ordian ordian modified the milestones: 2.6, 2.7 Jul 12, 2019
adria0 commented Jul 27, 2020

Closing this issue due to its stale state.

@adria0 adria0 closed this as completed Jul 27, 2020