
feat: appservice #204

Merged
merged 7 commits into from
May 11, 2021

Conversation

johannescpk
Contributor

@johannescpk commented Apr 15, 2021

Follow-up tracking issue: #228


Todo

  • Figure out how to allow proper manual syncing in addition: this probably requires appservice-specific handling of the sync token
  • Expose user/room queries from the homeserver as events on the eventhandler
  • Move actix auth check into middleware
  • Persist info about registered virtual users
  • Improve documentation
  • Try E2EE: @MTRNord pointed out that the mautrix-whatsapp bridge does it by joining the main appservice user into all channels in addition to the virtual user and having it handle the keys. That sounds like the most useful approach as long as Synapse doesn't push to-device events to appservices, since otherwise it'd need to /sync for every virtual user
  • Support hyper (maybe merge in from matrix-appservice-rs)
  • More tracing
  • Rename client_with_localpart to client_with_identity_assertion
  • Move get_host_and_port_from_registration into AppserviceRegistration::get_host_and_port
  • Add run_with_callback(sync_response: SyncResponse)
  • Try cargo test --exclude matrix_sdk_appservice to work around unwanted appservice feature activation in the main crate: that would allow getting rid of feat: appservice #204 (comment). The issue here is that it'd no longer be possible to just run cargo test in the workspace root
  • Check if it makes sense to introduce an InnerAppservice for Arc purposes, or single Arcs for member fields (from feat: appservice #204 (comment))
  • Try to implement the conversion from transaction to sync response directly in ruma
  • Implement new for appservice scoped IncomingRequests in ruma (behind incoming-ctors feature) so we can get rid of activating the unstable-exhaustive-types feature
  • Make tests work if appservice feature is enabled, it currently breaks things as the tests rely on mockito URL matching, and the appservice feature appends the user_id query GET parameter. Possible solution: Add something like assert_identity: bool to the RequestConfig instead of changing the behavior based on the feature flag
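The `assert_identity`-on-`RequestConfig` idea from the last point could look roughly like this. All names and fields below are hypothetical stand-ins modeling the behavior, not the actual matrix-sdk API:

```rust
// Hypothetical model of the proposed `assert_identity: bool` on
// `RequestConfig`: tests opt out of the appended `user_id` query
// parameter explicitly instead of the behavior being tied to the
// `appservice` feature flag. Not the actual matrix-sdk API.
#[derive(Debug, Clone, Copy, Default)]
struct RequestConfig {
    assert_identity: bool,
}

impl RequestConfig {
    // Builder-style setter, mirroring how request options are usually set.
    fn assert_identity(mut self) -> Self {
        self.assert_identity = true;
        self
    }
}

// Appends the `user_id` query parameter only when identity assertion
// is requested, so mockito URL matching in tests stays predictable.
fn request_path(config: RequestConfig, path: &str, user_id: &str) -> String {
    if config.assert_identity {
        format!("{path}?user_id={user_id}")
    } else {
        path.to_owned()
    }
}

fn main() {
    let plain = RequestConfig::default();
    let asserting = RequestConfig::default().assert_identity();
    println!("{}", request_path(plain, "/_matrix/client/r0/account/whoami", "@_bot:example.org"));
    println!("{}", request_path(asserting, "/_matrix/client/r0/account/whoami", "@_bot:example.org"));
}
```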

The general idea is that incoming appservice transactions from the homeserver are translated into an api::client::r0::sync::sync_events::Response and then passed into receive_sync_response.
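That translation can be sketched with simplified stand-in types. The real code maps ruma's transaction and sync-response structs; everything below is illustrative:

```rust
// Simplified stand-in types for ruma's transaction/sync-response
// structs: an appservice transaction delivers a flat list of events,
// which gets grouped into per-room timelines as in `rooms.join`.
use std::collections::HashMap;

#[derive(Debug)]
struct Event {
    room_id: String,
    event_type: String,
}

#[derive(Debug, Default)]
struct SyncResponseLike {
    // room_id -> timeline events, mirroring `rooms.join[room_id].timeline`
    rooms: HashMap<String, Vec<Event>>,
}

fn transaction_to_sync_response(events: Vec<Event>) -> SyncResponseLike {
    let mut response = SyncResponseLike::default();
    for event in events {
        response
            .rooms
            .entry(event.room_id.clone())
            .or_default()
            .push(event);
    }
    response
}

fn main() {
    let events = vec![
        Event { room_id: "!a:example.org".into(), event_type: "m.room.member".into() },
        Event { room_id: "!a:example.org".into(), event_type: "m.room.message".into() },
        Event { room_id: "!b:example.org".into(), event_type: "m.room.member".into() },
    ];
    let response = transaction_to_sync_response(events);
    println!("{}", response.rooms.len()); // 2 rooms
    println!("{}", response.rooms["!a:example.org"].len()); // 2 events in !a
}
```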

The user associated with the client Session is the "main appservice user" (sender_localpart in the registration). This allows the state store to be properly populated and gives an API surface similar to a regular client while still handling most events coming in over transactions:

let config = AppserviceConfig::new(homeserver_url, server_name, appservice_registration)?;
let appservice = Appservice::new(config).await?;

struct ExampleHandler {}
impl ExampleHandler {
    pub fn new() -> Self {
        Self {}
    }
}

#[async_trait]
impl EventHandler for ExampleHandler {
    async fn on_state_member(
        &self,
        room: Room,
        event: &SyncStateEvent<MemberEventContent>,
    ) {
        dbg!(room, event);
    }
}

appservice
    .client()
    .set_event_handler(Box::new(ExampleHandler::new()))
    .await;

appservice.receive_transaction(incoming_transaction).await;

This draft only handles AnyStateEvent::RoomMember with MembershipState::Join for now. If the general direction sounds like something you'd consider supporting, I'd keep completing the approach.

There's also support in ruma now to assign user_ids to OutgoingRequests, so another API surface that I'd like to add is something like appservice.client_with_user_id() which returns a client that attaches the given user_id to all requests.
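On the wire, such a client boils down to the identity assertion the appservice spec defines: appending the user_id query parameter to every request. A rough sketch (the helper name and the encoding details are illustrative):

```rust
// Illustrative helper: attach the asserted `user_id` query parameter
// to a request path, as appservice identity assertion does.
// A real implementation would percent-encode the user ID.
fn with_asserted_identity(path: &str, user_id: &str) -> String {
    let separator = if path.contains('?') { '&' } else { '?' };
    format!("{path}{separator}user_id={user_id}")
}

fn main() {
    println!("{}", with_asserted_identity("/_matrix/client/r0/sync", "@_bridge_alice:example.org"));
    println!("{}", with_asserted_identity("/_matrix/client/r0/sync?since=s1", "@_bridge_alice:example.org"));
}
```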

Another topic is whether the SDK wants to expose the actual webserver that receives HTTP requests from the homeserver. Currently there's only an actix example included, but exposing it behind a feature flag like appservice-actix or something could be considered.

And finally, this isn't E2EE-ready. I guess it would need a crypto store instance for every virtual appservice user.

Contributor

@poljar left a comment


Generally this looks really neat. I know this is an initial version, but I pointed out that we should be a bit better with the docs anyway.

I need to double check if it's ok to have a crate in the repo that I'm technically not maintaining myself, and after we fix those couple small nits we can merge this.

// this should be unreachable since assert_identity on request_config can only be set
// with the appservice feature active
#[cfg(not(feature = "appservice"))]
return Err(HttpError::NeedsAppserviceFeature);
Contributor


Hmm, it feels a bit wrong to have an error that never gets returned. Can't we restructure that somehow so we don't need this error?

Contributor Author

@johannescpk May 10, 2021


Changed to a cfg-based feature switch, which also allows restricting all assert_identity things to the appservice feature. The catch is that because of crate interdependence the appservice feature currently seems to always be enabled when running commands in the crate root. So cargo test in the root will run with the appservice feature active in matrix-sdk. Something we could do is run cargo test --workspace --exclude matrix-sdk-appservice in CI and the appservice tests with an additional command? I'll open a PR to rework the test-features CI pipeline and try out how that would look.

Contributor Author


I've integrated the CI changes here directly, with #226 being a possible cleanup to reduce CI time.


/// Appservice
#[derive(Debug, Clone)]
pub struct Appservice {
Contributor


Don't we want to put at least some of those behind an Arc? The Client already is, but the registration might end up being quite big and costly to clone, no? (I have no idea how big those files might end up being.)

Contributor Author

@johannescpk May 10, 2021


Good point. I'll check which Arc strategy would make sense and work on that, preferably in a follow-up: noted in the Todos; I'll move them into a tracking issue if landing happens after your double check.

Contributor


Since they are read-only I think we can just put them inside an Arc and we're good to go, no?

Contributor Author


Right, probably doesn't need a lock 👍

@@ -37,6 +43,7 @@
//! .unwrap();
//!
//! let appservice = Appservice::new(homeserver_url, server_name, registration).await.unwrap();
//! // set event handler with `appservice.client().set_event_handler()` here
Contributor Author

@johannescpk May 10, 2021


The plan is to refactor that into a compiling run_with_callback example instead and to note that using an event handler is also available, since constructing an event handler for the quick start is a bit heavy.
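A standalone model of what that run_with_callback quick start could look like (the signature is assumed; this API does not exist in the SDK yet):

```rust
// Stand-in types modeling the planned `run_with_callback(sync_response)`
// flow; in the real SDK the callback would receive the sync response
// built from each incoming appservice transaction.
struct SyncResponse {
    next_batch: String,
}

struct Appservice;

impl Appservice {
    // Invokes `callback` once per sync response. Here a single
    // synthetic response is fed through to show the shape.
    fn run_with_callback<F: FnMut(SyncResponse)>(&self, mut callback: F) {
        callback(SyncResponse { next_batch: "s1".to_owned() });
    }
}

fn main() {
    let appservice = Appservice;
    appservice.run_with_callback(|response| {
        println!("{}", response.next_batch);
    });
}
```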

@johannescpk force-pushed the feat/appservice branch 2 times, most recently from eb876e3 to d9453b4 on May 10, 2021 07:47
Contributor

@poljar left a comment


I think this is fine to merge, that is if you want to take care of the rest of the TODOs and the Arcing in separate PRs.



@johannescpk
Contributor Author

Thanks for the feedback so far!

I think this is fine to merge, that is if you want to take care of the rest of the TODOs and the Arcing in separate PRs.

Yeah, I'd definitely continue working on it. If you prefer, I could do it in this PR as well, but otherwise with a tracking issue & follow-up PRs. Also, if you think at some point that it might make sense to externalize the crate, feel free to let me know.

@poljar
Contributor

poljar commented May 11, 2021

No, let's merge. Smaller PRs are way easier to review.

@poljar poljar merged commit 4c09c62 into matrix-org:master May 11, 2021
@johannescpk johannescpk deleted the feat/appservice branch May 11, 2021 07:57