Multiple user collaboration for data section #10712
Conversation
Commits:
- …ry builder or not
- …ng loading states
- …prevent access to the design and automate pages
- … grid column labels. Allow deletion of duplicated user columns
```diff
@@ -27,6 +27,7 @@ export enum Databases {
   GENERIC_CACHE = "data_cache",
   WRITE_THROUGH = "writeThrough",
   LOCKS = "locks",
+  SOCKET_IO = "socket_io",
```
This redis namespace is never used because socket.io generates its own key names, so I've only added it to be consistent with how we construct other instances of our redis wrapper. Technically we could still use that client instance manually, so I think it's prudent to reserve a dedicated namespace.
With this in mind, is there any way we could get the socket.io layer to use its own redis database? That way it's in a completely separate keyspace and won't show up in any key/scan lists, which could help the performance of certain features that enumerate a lot of keys.
Since it's only using redis for pub/sub, I'm not sure anything it actually sets is persistent. It's all just ephemeral messages which are sent to subscribers but then dropped. At least I'm pretty sure that's how it works.
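To illustrate why pub/sub leaves nothing behind in the keyspace, here's a minimal in-memory sketch of fire-and-forget channel semantics (an `EventEmitter` stand-in, not redis itself): messages go only to subscribers that are listening at publish time, and nothing is stored afterwards.

```typescript
import { EventEmitter } from "node:events";

// In-memory stand-in for a redis pub/sub channel: messages are delivered
// to current subscribers and then dropped, so nothing lands in the keyspace.
class Channel extends EventEmitter {
  publish(message: string): number {
    // Mirrors redis PUBLISH's reply: the number of subscribers reached.
    return this.emit("message", message) ? this.listenerCount("message") : 0;
  }
}

const channel = new Channel();
const received: string[] = [];
channel.on("message", msg => received.push(msg));

channel.publish("cell-selected");

// A subscriber that joins late sees nothing: there is no stored state to read.
const lateReceived: string[] = [];
channel.on("message", msg => lateReceived.push(msg));
```

A late subscriber can never "catch up" on past messages, which is exactly why there are no persistent keys for scans to trip over.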
Ah I see, so it doesn't show up in the keyspace at all. That's perfect, then it won't affect anything else in terms of performance!
```diff
@@ -12,21 +13,9 @@ function pickApi(tableId: any) {
   return internal
 }
 
-function getTableId(ctx: any) {
```
Just moved this to utils so I can reuse it elsewhere.
```diff
@@ -48,18 +49,16 @@ export async function save(ctx: Ctx) {
   if (!view.meta.schema) {
     view.meta.schema = table.schema
   }
-  table.views[viewName] = view.meta
+  table.views[viewName] = { ...view.meta, name: viewName }
```
There were some weird inconsistencies with where the name of a view was stored, especially in the frontend code that calls this endpoint. I've rewritten most of that, and now I'm also ensuring that the view name is properly saved on the view itself.
```diff
@@ -61,7 +61,6 @@ if (env.isProd()) {
 
 const server = http.createServer(app.callback())
 destroyable(server)
-initialiseWebsockets(app, server)
```
Websocket initialisation has been moved to startup.ts, as it needs to happen after redis has been initialised so that we can use the socket.io redis adapter.
```diff
-// they do have lock, update it
-await updateLock(appId, ctx.user)
+// If this user already owns the lock, then update it
+if (await doesUserHaveLock(appId, ctx.user)) {
```
Now that we allow multiple users, we don't want to error if the user doesn't have the lock. If they don't have the lock, we just ignore them.
If the user does own the lock, then we update the expiration time.
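A minimal in-memory sketch of that "primary user" lock flow, assuming a map stands in for the real redis-backed lock storage (the real `doesUserHaveLock`/`updateLock` are async and TTL-backed; the names and TTL here mirror the description above but are illustrative):

```typescript
// Illustrative stand-in for the redis-backed app lock.
type Lock = { userId: string; expiresAt: number };
const locks = new Map<string, Lock>();
const LOCK_TTL_MS = 10 * 60 * 1000;

function doesUserHaveLock(appId: string, userId: string): boolean {
  const lock = locks.get(appId);
  return lock != null && lock.userId === userId;
}

function updateLock(appId: string, userId: string): void {
  locks.set(appId, { userId, expiresAt: Date.now() + LOCK_TTL_MS });
}

function checkLock(appId: string, userId: string): void {
  if (!locks.has(appId)) {
    // No primary user yet: this user takes the lock.
    updateLock(appId, userId);
  } else if (doesUserHaveLock(appId, userId)) {
    // The primary user is still active: refresh the expiry.
    updateLock(appId, userId);
  }
  // Any other user is simply ignored rather than rejected, which is what
  // lets secondary users into the app without erroring.
}
```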
Codecov Report

```
@@           Coverage Diff            @@
##           develop   #10712   +/-   ##
========================================
  Coverage    50.03%   50.03%
========================================
  Files          153      153
  Lines         5144     5144
  Branches      1028     1028
========================================
  Hits          2574     2574
  Misses        2354     2354
  Partials       216      216
```
```diff
-// we init this as we want to keep the connection open all the time
+// We need to maintain a duplicate client for socket.io pub/sub
 let socketSubClient: any
```
The socket.io redis adapter requires two separate redis clients, one for pub and one for sub. I can't use another instance of our redis wrapper class for this, as instances reuse the same underlying redis clients depending on the "selectDb" flag. The easiest way to handle this was to simply maintain a duplicate of the core redis client here, and ensure it's also cleaned up when appropriate.
Added a bunch of comments to explain the various server changes, but would appreciate a review in case anything should be moved around or done differently 🙏
LGTM! Looking forward to playing with this.
```diff
   tables.replaceTable(id, table)
 })
 
+// Table events
```
```suggestion
// Datasource events
```
?
```
export let lockedAction

$: editing = app?.lockedBy != null
$: initials = helpers.getUserInitials(app?.lockedBy)
```
Unused?
```diff
@@ -77,6 +78,7 @@ export async function save(ctx: UserCtx) {
   ctx.eventEmitter &&
     ctx.eventEmitter.emitTable(`table:save`, appId, savedTable)
   ctx.body = savedTable
+  builderSocket.emitTableUpdate(ctx, savedTable)
```
It would be beneficial to update the tests so we can mock builderSocket and confirm it's called with the correct data; the same applies for datasources, rows, etc. I'm not sure of the performance impact of setting up multiple sockets, but it'd be preferable to avoid doing so in unit tests.
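One hedged way to do this without spinning up any real sockets: replace the socket module with a recording stub and assert on what the controller emits. The stub and the simplified `save` controller below are illustrative, not the real module layout (in the actual suite you'd inject or `jest.mock()` the real builderSocket module).

```typescript
// Table documents, simplified for the sketch.
type Table = { _id: string; name: string };

// Recording stub standing in for the real builderSocket module.
const emitted: { event: string; payload: Table }[] = [];
const builderSocketStub = {
  emitTableUpdate(_ctx: unknown, table: Table) {
    emitted.push({ event: "table:update", payload: table });
  },
};

// Stand-in for the save controller: persists the table onto the response
// body, then notifies collaborators through the (stubbed) builder socket.
function save(ctx: { body?: Table }, table: Table) {
  ctx.body = table;
  builderSocketStub.emitTableUpdate(ctx, table);
}

const ctx: { body?: Table } = {};
save(ctx, { _id: "ta_1", name: "Users" });
```

The assertion surface is then just `emitted`, so the test never touches the network.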
```diff
   socket.leave(currentRoom)
 }
 
 // Join new room
 currentRoom = tableId
 socket.join(currentRoom)
-socket.to(currentRoom).emit("user-update", socket.data.user)
+socket.to(currentRoom).emit("user-update", user)
```
It would be nice to enumerate our socket events, for discovery and reuse.
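One possible shape for that, sketched here with illustrative event names (not the real set): a single `as const` object shared between client and server, so event names are discoverable in one place and never retyped as string literals.

```typescript
// Central enumeration of socket event names. The names below are
// hypothetical examples; the real list would grow from actual usage.
const SocketEvent = {
  UserUpdate: "user-update",
  UserDisconnect: "user-disconnect",
  TableChange: "table-change",
} as const;

// Union of all event name literals, derived from the object itself.
type SocketEvent = (typeof SocketEvent)[keyof typeof SocketEvent];

// Usage sketch: socket.to(room).emit(SocketEvent.UserUpdate, user)
```

The `as const` assertion keeps each value a string literal type, so a typo like `SocketEvent.UserUpdat` is a compile error rather than a silently dead listener.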
Edit: do not merge. There are still issues with conflicts when attempting to acquire usage quota docs, which is being investigated. I don't believe that's specifically part of this feature, but it is highlighted by it.
2nd edit: may as well merge this anyway and see what the conflicts are like in a real environment.
Description
This PR allows multiple users to collaborate on the same app. For now, the extent of that collaboration is limited to the data section only. Secondary users will not be able to see or edit anything in the design or automation sections.
Brief summary of changes:
Editing an app as the first user
If you're the first user to edit an app, nothing changes. You can open it as normal and work away. The only visual change for this first user is that you'll see your own user avatar displayed in the top right of the builder:
Editing an app when someone else is already editing it
Previously if someone was editing an app, we displayed a lock symbol in the app list page and we prevented other users from clicking it. The lock has been removed, but we still indicate if another user is using your app. As a secondary user looking at the app list, it will now look like this:
We can still click into it, but we show that another user is currently editing it. It's important to note here - the user we show is the "primary" user that has full control of the app. There could be 10 other users also inside the app, but we only show the person who currently has control.
Clicking into the app now, we are presented with a cut-down version of the builder:
The immediate visual differences are:
The primary user will see that a user has connected, and a new avatar will be displayed in the top right for them too.
Grid collaboration
If you're using the same grid as another user, you'll be able to see what they're doing:
Opening multiple tabs
You're able to open as many tabs as you like. Every tab will receive changes made by other users and function exactly the same. You won't show up multiple times in the avatar list though, because you're the same user.
The only time you'll see a visual difference with multiple tabs is if you edit the same grid in multiple tabs. In this scenario, everyone (including yourself) will be able to see all your selected cells. For example, here's the same user with 3 tabs open, all looking at the same grid:
From this other user's perspective, there is one other user editing the same app, but they have 3 sessions open and have selected 3 different cells.
From your own perspective, you'll be able to see all the other cells selected by you:
When you have multiple tabs open, you'll retain control as the "primary" builder until all tabs are closed.
Avatar standardisation
Previously we used multiple different avatars across the platform. These all showed different colours and different initials, so there was no consistency at all. I've standardised everything to use the same new component now, which ensures that no matter where you are inside the builder, the same user will always have the same avatar. Their colour will always be the same, and so will the initials inside the avatar.
Global users will appear identically to their app user equivalent as well.
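The "same user, same avatar everywhere" property boils down to deriving both colour and initials deterministically from stable user fields. A minimal sketch, assuming the email is the stable key and using a hypothetical colour palette and hash (the real helpers may differ):

```typescript
// Illustrative palette; the real builder theme colours may differ.
const AVATAR_COLOURS = ["#3aab87", "#4d7fe0", "#d05bd0", "#e26d69", "#dba238"];

type User = { firstName?: string; lastName?: string; email: string };

// Initials from name when available, otherwise from the email prefix.
function getUserInitials(user: User): string {
  if (user.firstName && user.lastName) {
    return `${user.firstName[0]}${user.lastName[0]}`.toUpperCase();
  }
  return user.email.slice(0, 2).toUpperCase();
}

// Stable string hash of the email picks the colour, so a given user gets
// the same colour in every part of the builder, on every client.
function getUserColour(user: User): string {
  let hash = 0;
  for (const ch of user.email) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return AVATAR_COLOURS[hash % AVATAR_COLOURS.length];
}
```

Because both functions are pure, a global user and their app-user equivalent render identically as long as they share the same email.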
Technical stuff
A new "builder" websocket is being used to propagate changes in real time. This means there are now 3 websockets used altogether:
All sockets continue to be readonly. The API is used as normal to make changes, and those changes are piped down the websocket to all users.
The new builder websocket handles all CRUD events for:
I've improved the performance of all sockets so that they now send down the changed docs themselves, as opposed to simply sending a notification that a certain doc ID changed, which caused all clients to fetch the new version. This change is more scalable and prevents API load spikes whenever someone else makes a change.
Socket.io (the library we use for websockets) offers a redis adapter which can be used to proxy messages between multiple server instances. This is what I'm using to ensure compatibility with scalable environments. I've been unable to test this locally, so I'll only really know if it works once we get this up on QA (and we might need to manually scale up the number of server pods just to test this). I've confirmed that redis pub/sub messages are indeed being sent so I'm assuming this is working.
There are also some unknowns regarding how websockets will perform in prod, considering the multiple layers of proxies and load balancers. We'll have to see what happens.
Clarification of the lock system changes
The lock system is still in use. Instead of being used to control who the only editor of an app is, it's now used to control who the "primary" user is. The lock is still acquired in middleware, and there's still a timeout to ensure idle people still get kicked. The only change made to the lock system is a faster removal of the lock when a user disconnects. The new builder websocket means we can immediately detect when a user stops editing an app, for any reason. When we detect this socket disconnecting, we check if that was their only session in the app, and if so we remove the app lock. This is much nicer than having to wait 10 minutes and explain to other users what was happening.
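The disconnect-driven release described above can be sketched as follows, with in-memory maps standing in for the real session tracking and redis-backed lock (function names are illustrative): the lock is dropped only when the disconnecting socket was the primary user's last open session.

```typescript
// appId:userId -> set of open socket ids for that user in that app.
const sessions = new Map<string, Set<string>>();
// appId -> userId of the current "primary" user.
const appLocks = new Map<string, string>();

function sessionKey(appId: string, userId: string) {
  return `${appId}:${userId}`;
}

function onConnect(appId: string, userId: string, socketId: string) {
  const key = sessionKey(appId, userId);
  if (!sessions.has(key)) sessions.set(key, new Set());
  sessions.get(key)!.add(socketId);
  // First user in becomes the primary user.
  if (!appLocks.has(appId)) appLocks.set(appId, userId);
}

function onDisconnect(appId: string, userId: string, socketId: string) {
  const key = sessionKey(appId, userId);
  const userSessions = sessions.get(key);
  userSessions?.delete(socketId);
  // Release the lock immediately, but only once the primary user's last
  // tab has closed; other tabs keep them in control.
  if (userSessions?.size === 0 && appLocks.get(appId) === userId) {
    appLocks.delete(appId);
  }
}
```

This also captures the multi-tab behaviour from earlier in the description: closing one of several tabs leaves the lock untouched.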
Addresses: