
Attachments take ages to upload and view #24266

Open
smcardle opened this issue Jan 24, 2022 · 14 comments

Comments

@smcardle

smcardle commented Jan 24, 2022

We have an issue after upgrading our Rocket.Chat install (server and client) and MongoDB to the latest versions (see versions below).

After upgrading MongoDB, first to wiredTiger, then to 4.2 / 4.4, then to 5.0, all attachments added to chats (images, audio, video, etc.) take an inordinate amount of time to upload.
Clicking on these attachments also takes an inordinate amount of time before they show or start playing.
By inordinate I mean 3 - 5 minutes to show images or upload new attachments, as opposed to just a few seconds previously.
None of our attachments are particularly large, and the problem affects ALL existing attachments (viewing) and new attachments (upload and/or view).

The stack trace

I get the following stack trace in the Rocket.Chat logs, which seems to indicate a missing parameter OR missing indexes. Any ideas on how to fix this would be appreciated.

ufs: cannot read file "mC3nrWQ3cY7ifoJwz" (Executor error during find command :: caused by :: Sort exceeded memory limit of 104857600 bytes, but did not opt in to external sorting.) MongoError: Executor error during find command :: caused by :: Sort exceeded memory limit of 104857600 bytes, but did not opt in to external sorting.
at MessageStream.messageHandler (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/cmap/connection.js:272:20)
at MessageStream.emit (events.js:314:20)
at MessageStream.EventEmitter.emit (domain.js:483:12)
at processIncomingData (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/cmap/message_stream.js:144:12)
at MessageStream._write (/app/bundle/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/cmap/message_stream.js:42:5)
at doWrite (_stream_writable.js:403:12)
at writeOrBuffer (_stream_writable.js:387:5)
at MessageStream.Writable.write (_stream_writable.js:318:11)
at Socket.ondata (_stream_readable.js:718:22)
at Socket.emit (events.js:314:20)
at Socket.EventEmitter.emit (domain.js:483:12)
at addChunk (_stream_readable.js:297:12)
at readableAddChunk (_stream_readable.js:272:9)
at Socket.Readable.push (_stream_readable.js:213:10)
at TCP.onStreamRead (internal/stream_base_commons.js:188:23)
at TCP.callbackTrampoline (internal/async_hooks.js:126:14) {
ok: 0,
code: 292,
codeName: 'QueryExceededMemoryLimitNoDiskUseAllowed',
'$clusterTime': {
clusterTime: Timestamp { bsontype: 'Timestamp', low: 2, high: 1642705737 },
signature: { hash: [Binary], keyId: 0 }
},
operationTime: Timestamp { bsontype: 'Timestamp', low: 2, high: 1642705737 }
}
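
For reference, the error means MongoDB is trying to sort the GridFS chunks in memory (capped at 104857600 bytes, i.e. ~100 MB) instead of walking an index. A quick way to check which indexes actually exist, assuming the default GridFS collection names, is:

mongo
use rocketchat
db.rocketchat_uploads.files.getIndexes()
db.rocketchat_uploads.chunks.getIndexes()

A healthy chunks collection should list an index on { files_id: 1, n: 1 } in addition to _id_.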

Server Setup Information:

  • Version of Rocket.Chat Server: 4.3.2
  • Operating System: Ubuntu 20.04
  • Deployment Method: Docker
  • Number of Running Instances: 1
  • DB Replicaset Oplog: --oplogSize 128 --replSet rs0 --storageEngine=wiredTiger (single instance replica set)
  • NodeJS Version: v12.22.8
  • MongoDB Version: 5.0.5
  • Using GridFS for file storage

Client Setup Information

  • Desktop App or Browser Version: Both
  • Operating System: Multiple

Steve

@ankar84

ankar84 commented Jan 24, 2022

Using GridFS for file storage

That is the reason.
Use MinIO as S3 file storage.

You could use the migrator tool: GitHub - RocketChat/filestore-migrator

@smcardle
Author

Will the migration tool migrate all existing attachments?

@ankar84

ankar84 commented Jan 24, 2022

Will the migration tool migrate all existing attachments?

I don't actually know; I saw that advice here.

We have been using S3 from the start without any performance issues.

@tobiasreinhard

I have the same issue. #24225

@smcardle
Author

smcardle commented Jan 25, 2022

OK. So it seems the mongorestore run during the upgrade failed to create the indexes on the rocketchat_uploads / rocketchat_uploads.files / rocketchat_uploads.chunks collections.
Only the primary key _id indexes were created.

This is also true for the rocketchat_avatar collections.

These collections are currently used for GridFS file storage, and the amount of time to upload a new file or view an existing file is just unbearable.

Does anybody know where I can find a current list of the indexes that should exist on these collections for Rocket.Chat version 4.3.2?

Regards
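
(For anyone landing here with the same question: the MongoDB GridFS convention, which the official drivers create automatically on a fresh bucket, is roughly the two indexes below. This is the GridFS convention rather than a confirmed Rocket.Chat index list, and the avatar collections would need the same treatment, so comparing against a healthy install, as done further down, remains the safest reference.)

mongo
use rocketchat
db.rocketchat_uploads.chunks.createIndex( { files_id: 1, n: 1 }, { unique: true } )
db.rocketchat_uploads.files.createIndex( { filename: 1, uploadDate: 1 } )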

@dantematik

Hi, I have the same thing (running Docker images) ... loading is very slow. I used the migration tool and also did a fresh import from the backup; always the same issue. I also went through version 3.18.5 for the MongoDB upgrade ... then Mongo 4.2 (chat 4.4.2), same issues. The database runs locally and is around 5 GB. Any progress on this topic?

@artemChernitsov

artemChernitsov commented Apr 12, 2022

I had the same issue.
It looks like after migrating the database following the Rocket.Chat manual, everyone forgets about the indexes on the uploads collections.
It is solved by this comment: #23467 (comment)

Standalone setup

mongo
use rocketchat
db.rocketchat_uploads.chunks.createIndex( { files_id: 1, n: 1 }, { unique: true } )

Docker setup

docker exec -it mongodb mongo
use rocketchat
db.rocketchat_uploads.chunks.createIndex( { files_id: 1, n: 1 }, { unique: true } )
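
To verify the index is actually being used afterwards, one option (a sketch, reusing the file id from the stack trace above; substitute any real files_id from rocketchat_uploads.files) is:

db.rocketchat_uploads.chunks.find( { files_id: "mC3nrWQ3cY7ifoJwz" } ).sort( { n: 1 } ).explain("queryPlanner")

The winning plan should show an IXSCAN on files_id_1_n_1 rather than an in-memory SORT stage.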

@tobiasreinhard

I had the same issue. It looks like after migrating the database following the Rocket.Chat manual, everyone forgets about the indexes on the uploads collections. It is solved by this comment: #23467 (comment)

Standalone setup

mongo
use rocketchat
db.rocketchat_uploads.chunks.createIndex( { files_id: 1, n: 1 }, { unique: true } )

Docker setup

docker exec -it mongodb mongo
use rocketchat
db.rocketchat_uploads.chunks.createIndex( { files_id: 1, n: 1 }, { unique: true } )

Thank you for your effort, I will try this ASAP and leave feedback here.

@smcardle
Author

I actually use MongoDB Compass and luckily had another install I could find the indexes for.
After adding these back manually, everything worked fine.

Next... change GridFS to use an S3 bucket.

@Kevinsky86

Kevinsky86 commented Apr 29, 2022

I had the same issue. It looks like after migrating the database following the Rocket.Chat manual, everyone forgets about the indexes on the uploads collections. It is solved by this comment: #23467 (comment)

Standalone setup

mongo
use rocketchat
db.rocketchat_uploads.chunks.createIndex( { files_id: 1, n: 1 }, { unique: true } )

Docker setup

docker exec -it mongodb mongo
use rocketchat
db.rocketchat_uploads.chunks.createIndex( { files_id: 1, n: 1 }, { unique: true } )

For anybody running into this using snap:

To get into mongo:

snap run rocketchat-server.mongo

To find databases:

show dbs

A list will show up. Identify the database that holds the Rocket.Chat collections (parties in my case):

use parties

Verify the collection is there:

show collections

And then do the thing:

db.rocketchat_uploads.chunks.createIndex( { files_id: 1, n: 1 }, { unique: true } )

Let it do its thing.

exit
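
If you prefer a one-liner, the same index can probably be created non-interactively with --eval, assuming the snap's mongo wrapper passes its arguments through to the mongo shell and that your database really is called parties:

snap run rocketchat-server.mongo parties --eval 'db.rocketchat_uploads.chunks.createIndex({ files_id: 1, n: 1 }, { unique: true })'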

@ziplex

ziplex commented Feb 1, 2023

I had the same issue. It looks like after migrating the database following the Rocket.Chat manual, everyone forgets about the indexes on the uploads collections. It is solved by this comment: #23467 (comment)

Standalone setup

mongo
use rocketchat
db.rocketchat_uploads.chunks.createIndex( { files_id: 1, n: 1 }, { unique: true } )

Docker setup

docker exec -it mongodb mongo
use rocketchat
db.rocketchat_uploads.chunks.createIndex( { files_id: 1, n: 1 }, { unique: true } )

Thanks Artem, it works!

@TheWrongGuy

TheWrongGuy commented Mar 15, 2023

Using GridFS for file storage

That is the reason. Use MinIO as S3 file storage.

This is NOT true!

I had the same issue. It looks like after migrating the database following the Rocket.Chat manual, everyone forgets about the indexes on the uploads collections. It is solved by this comment: #23467 (comment)

Standalone setup

mongo
use rocketchat
db.rocketchat_uploads.chunks.createIndex( { files_id: 1, n: 1 }, { unique: true } )

Docker setup

docker exec -it mongodb mongo
use rocketchat
db.rocketchat_uploads.chunks.createIndex( { files_id: 1, n: 1 }, { unique: true } )

This is true and fixes the issue for me too on a Rocket.Chat 6.0 setup. Thank you!

@Ruppsn

Ruppsn commented May 25, 2023

Thanks. We had massive problems with high CPU and block I/O after the upgrade. This fixed it for us. We were following the official Rocket.Chat manual; this step is missing from it.

docker exec -it mongodb mongo
use rocketchat
db.rocketchat_uploads.chunks.createIndex( { files_id: 1, n: 1 }, { unique: true } )

Thank you

@nikolaysu

nikolaysu commented Feb 5, 2024

In my case it doesn't fully work.
I really didn't have the indexes, and because of this downloading files was very slow and uploading took a lot of time.
After creating the index,
db.rocketchat_uploads.chunks.createIndex( { files_id: 1, n: 1 }, { unique: true } )
everything became very fast, but the error did not go away.

MongoServerError: Executor error during find command :: caused by :: Sort exceeded memory limit of 104857600 bytes, but did not opt in to external sorting.
at Connection.onMessage (/opt/Rocket.Chat.6.4.8/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/cmap/connection.js:231:30)
at MessageStream.<anonymous> (/opt/Rocket.Chat.6.4.8/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/cmap/connection.js:61:60)
at MessageStream.emit (events.js:400:28)
at MessageStream.emit (domain.js:475:12)
at processIncomingData (/opt/Rocket.Chat.6.4.8/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/cmap/message_stream.js:125:16)
at MessageStream._write (/opt/Rocket.Chat.6.4.8/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb/lib/cmap/message_stream.js:33:9)
at writeOrBuffer (internal/streams/writable.js:358:12)
at MessageStream.Writable.write (internal/streams/writable.js:303:10)
at Socket.ondata (internal/streams/readable.js:731:22)
at Socket.emit (events.js:400:28)
at Socket.emit (domain.js:475:12)
at addChunk (internal/streams/readable.js:293:12)
at readableAddChunk (internal/streams/readable.js:267:9)
at Socket.Readable.push (internal/streams/readable.js:206:10)
at TCP.onStreamRead (internal/stream_base_commons.js:188:23)
at TCP.callbackTrampoline (internal/async_hooks.js:130:17)
=> awaited here:
at Function.Promise.await (/opt/Rocket.Chat.6.4.8/programs/server/npm/node_modules/meteor/promise/node_modules/meteor-promise/promise_server.js:56:12)
at app/file-upload/server/lib/FileUpload.ts:755:14
at /opt/Rocket.Chat.6.4.8/programs/server/npm/node_modules/meteor/promise/node_modules/meteor-promise/fiber_pool.js:43:40
=> awaited here:
at Function.Promise.await (/opt/Rocket.Chat.6.4.8/programs/server/npm/node_modules/meteor/promise/node_modules/meteor-promise/promise_server.js:56:12)
at server/lib/dataExport/uploadZipFile.ts:35:12
at /opt/Rocket.Chat.6.4.8/programs/server/npm/node_modules/meteor/promise/node_modules/meteor-promise/fiber_pool.js:43:40
=> awaited here:
at Function.Promise.await (/opt/Rocket.Chat.6.4.8/programs/server/npm/node_modules/meteor/promise/node_modules/meteor-promise/promise_server.js:56:12)
at server/lib/dataExport/processDataDownloads.ts:232:25
at /opt/Rocket.Chat.6.4.8/programs/server/npm/node_modules/meteor/promise/node_modules/meteor-promise/fiber_pool.js:43:40 {
ok: 0,
code: 292,
codeName: 'QueryExceededMemoryLimitNoDiskUseAllowed',
'$clusterTime': {
clusterTime: new Timestamp({ t: 1706894125, i: 1 }),
signature: {
hash: new Binary(Buffer.from("874904064985896744cfa9ec317ddb6dc4cf1517", "hex"), 0),
keyId: new Long("7325743666008424449")
}
},
operationTime: new Timestamp({ t: 1706894125, i: 1 }),
[Symbol(errorLabels)]: Set(0) {}
}

Update: to avoid errors when exporting user data, also create this index:

db.rocketchat_userDataFiles.chunks.createIndex( { files_id: 1, n: 1 }, { unique: true } )
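
If you want to cover every GridFS-backed collection in one go (uploads, avatars, user data exports, etc.), a loop along these lines should work from the mongo shell. This is only a sketch: it simply targets every collection whose name ends in .chunks, so check show collections first to see what actually exists on your install.

// create the GridFS chunks index on every *.chunks collection in the current database
db.getCollectionNames()
  .filter(function (name) { return name.endsWith('.chunks'); })
  .forEach(function (name) {
    // same index as recommended above, applied per collection
    db.getCollection(name).createIndex({ files_id: 1, n: 1 }, { unique: true });
  });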
