Hello dear reader, thank you for adopting version 4.x of the MongoDB Node.js driver. From the bottom of our developer hearts, we thank you for taking the time to upgrade to our latest and greatest offering of a stunning database experience. We hope you enjoy your upgrade experience and that this guide gives you all the answers you are searching for. If anything, and we mean anything, hinders your upgrade experience, please let us know via JIRA. We know breaking changes are hard, but they are sometimes for the best. Anyway, enjoy the guide, see you at the end!
We've migrated the driver to TypeScript! Users can now harness the power of type hinting and IntelliSense in editors that support it to develop their MongoDB applications. Even pure JavaScript projects can benefit from the type definitions with the right linting setup. Along with the type hinting there's consistent and helpful doc formatting that editors should be able to display while developing. We recently migrated our BSON library to TypeScript as well; this version of the driver pulls in that change.
If you are a user of the community types (@types/mongodb), there will likely be compilation errors while adopting the types from our codebase. Unfortunately we could not achieve a one-to-one match in types, both because of the differences between writing the codebase in TypeScript and writing type definitions for the user-facing API, and because of the breaking changes in this major version. Please let us know via JIRA if anything is a blocker to upgrading.
We now require Node.js 12.9 or greater for version 4 of the driver. If that's outside your support matrix at this time, that's okay! Bug fix support for our 3.x branch will not end until summer 2022, and that branch supports Node.js versions going back as far as v4!
Affected classes:

- AbstractCursor
- FindCursor
- AggregationCursor
- ChangeStreamCursor - This is the underlying cursor for ChangeStream
- ListCollectionsCursor
Our Cursor implementation has been updated to clarify what is possible before and after execution of an operation. Take this example:
```ts
const cursor = collection.find({ a: 2.3 }).skip(1);
for await (const doc of cursor) {
  console.log(doc);
  cursor.limit(1); // bad: the cursor is already in use
}
```
Prior to this release there was inconsistency surrounding how the cursor would error if a setting like `limit` was applied after cursor execution had begun. Now an error along the lines of `Cursor is already initialized` is thrown.
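By contrast, configuring the cursor fully before iterating works as expected. A minimal sketch, assuming the same `collection` as above:

```ts
// Apply all cursor settings (skip, limit, sort, etc.) before iteration begins.
const cursor = collection.find({ a: 2.3 }).skip(1).limit(10);
for await (const doc of cursor) {
  console.log(doc); // no cursor re-configuration inside the loop
}
```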
You cannot use a ChangeStream as an iterator after using it as an EventEmitter, nor vice versa. Previously the driver would permit this kind of usage, but it could lead to unpredictable behavior and obscure errors. It's unlikely this kind of usage was useful, but to be sure we now prevent it by throwing a clear error.
```ts
const changeStream = db.watch();
changeStream.on('change', doc => console.log(doc));
await changeStream.next(); // throws: Cannot use ChangeStream as iterator after using as an EventEmitter
```
Or the reverse:
```ts
const changeStream = db.watch();
await changeStream.next();
changeStream.on('change', doc => console.log(doc)); // throws: Cannot use ChangeStream as an EventEmitter after using as an iterator
```
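To avoid these errors, pick one consumption style per ChangeStream. A minimal sketch of the iterator-only approach, assuming the same `db` as above:

```ts
const changeStream = db.watch();
// Only iterate; never attach 'change' listeners to this instance.
while (await changeStream.hasNext()) {
  console.log(await changeStream.next());
}
await changeStream.close();
```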
The Cursor no longer extends Readable directly; it must be transformed into a stream by calling `cursor.stream()`, for example:
```ts
const cursor = collection.find({});
const stream = cursor.stream();
stream.on('data', data => console.log(data));
stream.on('end', () => client.close());
```
`Cursor.transformStream()` has been removed. `Cursor.stream()` accepts a transform function, so that API was redundant.
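For example, the transform that used to be passed to `transformStream()` can be supplied to `stream()` instead; a minimal sketch:

```ts
const cursor = collection.find({});
// The optional transform is applied to each document before it is emitted.
const stream = cursor.stream({ transform: doc => JSON.stringify(doc) });
stream.on('data', data => console.log(data));
```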
With type hinting users should find that the options passed to a MongoClient are completely enumerated and easily discoverable.
In 3.x there were options, like `maxPoolSize`, that were only respected when `useUnifiedTopology=true` was enabled, versus `poolSize` when `useUnifiedTopology=false`.
We've de-duplicated these options and put together some hefty validation that processes all options upfront, giving early warnings about incompatible settings to help your app get up and running correctly more quickly!
Internally, we now only manage a Unified Topology when you connect to your MongoDB. The differences are described in detail here.
Feel free to remove the `useUnifiedTopology` and `useNewUrlParser` options at your leisure; they are no longer used by the driver.
NOTE: With the unified topology, in order to connect to replicaSet nodes that have not been initialized you must use the new `directConnection` option.
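As a minimal sketch (the URI and values are placeholders), the unified options are now passed directly to the MongoClient constructor:

```ts
const { MongoClient } = require('mongodb');

const client = new MongoClient('mongodb://localhost:27017', {
  maxPoolSize: 10,       // replaces the legacy poolSize option
  directConnection: true // only needed to connect directly to an uninitialized replica set member
});
```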
Specifying username and password as options is only supported in these two formats:
```ts
new MongoClient(url, { auth: { username: '', password: '' } })
new MongoClient('mongodb://username:password@myDb.host')
```
Specifying `checkServerIdentity === false` (along with enabling TLS) is different from leaving it `undefined`. The 3.x version intercepted `checkServerIdentity: false` and turned it into a no-op function, which is the way Node.js requires you to skip server identity checks. Setting this option to `false` is only for testing anyway, as it disables essential TLS verification, so it made sense for our library to directly expose Node.js's own option validation. If you need to test TLS connections without verifying the server identity, pass in `{ checkServerIdentity: () => {} }`.
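A minimal sketch of that testing-only workaround (the URI is a placeholder):

```ts
const { MongoClient } = require('mongodb');

// Testing only: keeps TLS enabled but skips server identity verification.
const client = new MongoClient('mongodb://localhost:27017', {
  tls: true,
  checkServerIdentity: () => undefined // no-op function replaces the old checkServerIdentity: false
});
```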
`gssapiServiceName` has been removed. Users should use `authMechanismProperties.SERVICE_NAME` like so:

- In a URI query param: `?authMechanismProperties=SERVICE_NAME:alternateServiceName`
- Or as an option: `{ authMechanismProperties: { SERVICE_NAME: 'alternateServiceName' } }`
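As a minimal sketch (the host, user, and Kerberos details are placeholders), the option form might look like:

```ts
const { MongoClient } = require('mongodb');

// Hypothetical GSSAPI/Kerberos connection using the new option name.
const client = new MongoClient('mongodb://user%40EXAMPLE.COM@mongo.example.com', {
  authMechanism: 'GSSAPI',
  authMechanismProperties: { SERVICE_NAME: 'alternateServiceName' }
});
```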
The only option that required the use of the callback was strict mode. The strict option would return an error if the collection did not exist. Users who wish to ensure operations only execute against existing collections should use `db.listCollections` directly.
For example:
```ts
const collections = (await db.listCollections({}, { nameOnly: true }).toArray()).map(
  ({ name }) => name
); // map to get string[]

if (!collections.includes(myNewCollectionName)) {
  throw new Error(`${myNewCollectionName} doesn't exist`);
}
```
In 3.x we exported both of the names above; we now only export `MongoBulkWriteError`. Users testing for `BulkWriteError`s should be sure to import the new class name, `MongoBulkWriteError`.
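For instance, a minimal sketch of catching the renamed error class (the collection and `docs` are placeholders):

```ts
const { MongoBulkWriteError } = require('mongodb');

try {
  await collection.insertMany(docs, { ordered: true });
} catch (error) {
  if (error instanceof MongoBulkWriteError) {
    // The renamed class still carries the bulk write details.
    console.error('bulk write failed:', error.result);
  } else {
    throw error;
  }
}
```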
The Db instance is no longer an EventEmitter; all events your application is concerned with can be listened to directly from the `MongoClient` instance.
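A minimal sketch, using an SDAM heartbeat event as an example; the listener now goes on the client rather than the Db:

```ts
// Monitoring events such as serverHeartbeatSucceeded are emitted by the MongoClient.
client.on('serverHeartbeatSucceeded', event => {
  console.log('heartbeat from', event.connectionId);
});
```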
The collection `group()` helper has been deprecated in MongoDB since 3.4 and is now removed from the driver. The same functionality can be achieved using the aggregation pipeline's `$group` operator.
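A minimal sketch of the replacement (the collection and field names are hypothetical):

```ts
// Equivalent of the removed group() helper: count documents per status using $group.
const totals = await db
  .collection('orders')
  .aggregate([{ $group: { _id: '$status', count: { $sum: 1 } } }])
  .toArray();
```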
The deprecated GridStore API has been removed from the driver. For more information on GridFS see the MongoDB manual.
Below are some snippets that represent equivalent operations:
```ts
// old way
const gs = new GridStore(db, filename, mode); // optionally pass options as a fourth argument
// new way
const bucket = new GridFSBucket(client.db('test')); // optionally pass options as a second argument
```
Since GridFSBucket uses the Node.js Stream API, you can replicate file seeking by using the start and end options when creating a download stream from your GridFSBucket:

```ts
bucket.openDownloadStreamByName(filename, { start: 23, end: 52 });
```
```ts
// Assumes `client` is a constructed MongoClient, `fs` is Node's fs module,
// and GridFSBucket is imported from 'mongodb'.
await client.connect();
const filename = 'test.txt'; // whatever local file name you want
const db = client.db();
const bucket = new GridFSBucket(db);

fs.createReadStream(filename)
  .pipe(bucket.openUploadStream(filename))
  .on('error', console.error)
  .on('finish', () => {
    console.log('done writing to db!');

    bucket
      .find()
      .toArray()
      .then(files => {
        console.log(files);

        bucket
          .openDownloadStreamByName(filename)
          .pipe(fs.createWriteStream('downloaded_' + filename))
          .on('error', console.error)
          .on('finish', () => {
            console.log('done downloading!');
            client.close();
          });
      });
  });
```
Notably, GridFSBucket does not need to be closed like GridStore.
Deleting files hasn't changed much:
```ts
GridStore.unlink(db, name, callback); // Old way
bucket.delete(file_id); // New way!
```
File metadata that used to be accessible on the GridStore instance can be found by querying the bucket:

```ts
const fileMetaDataList: GridFSFile[] = await bucket.find({}).toArray();
```
The automatic MD5 hashing has been removed from the upload family of functions. This makes the default GridFS behavior compliant with systems that do not permit usage of MD5 hashing. The `disableMD5` option is no longer used and has no effect.
If you still want to add an MD5 hash to your file upload, here's a simple example that can be used with any hashing algorithm provided by Node.js:
```ts
// Assumes `db` is a connected Db instance, `crypto` and `fs` are Node's built-in modules,
// and GridFSBucket / ObjectId are imported from 'mongodb'.
const bucket = new GridFSBucket(db);

// can be whatever algorithm is supported by your local openssl
const hash = crypto.createHash('md5');
hash.setEncoding('hex'); // we want a hex string in the end

const _id = new ObjectId(); // we could also use file name to do the update lookup

const uploadStream = fs
  .createReadStream('./test.txt')
  .on('data', data => hash.update(data)) // keep the hash up to date with the file chunks
  .pipe(bucket.openUploadStreamWithId(_id, 'test.txt'));

const md5 = await new Promise((resolve, reject) => {
  uploadStream
    .once('error', error => reject(error))
    .once('finish', () => {
      hash.end(); // must call hash.end() otherwise hash.read() will be `null`
      resolve(hash.read());
    });
});

await db.collection('fs.files').updateOne({ _id }, { $set: { md5 } });
```
- NODE-3368: make name prop on error classes read-only (#2879)
- NODE-3291: standardize error representation in the driver (#2824)
- NODE-3272: emit correct event type when SRV Polling (#2825)
- NODE-1812: replace returnOriginal with returnDocument option (#2803)
- NODE-3157: update find and modify interfaces for 4.0 (#2799)
- NODE-2961: clarify empty BulkOperation error message (#2697)
- NODE-1709: stop emitting topology events from Db (#2251)
- NODE-2704: integrate MongoOptions parser into driver (#2680)
- NODE-2757: add collation to FindOperators (#2679)
- NODE-2602: createIndexOp returns string, CreateIndexesOp returns array (#2666)
- NODE-2936: conform CRUD result types to specification (#2651)
- NODE-2590: adds async iterator for custom promises (#2578)
- NODE-2458: format sort in cursor and in sort builder (#2573)
- NODE-2820: pull CursorStream out of Cursor (#2543)
- NODE-2850: only store topology on MongoClient (#2594)
- NODE-2423: deprecate oplogReplay for find commands (24155e7)
- NODE-2752: remove strict/callback mode from Db.collection helper (#2817)
- NODE-2978: remove deprecated bulk ops (#2794)
- NODE-1722: remove top-level write concern options (#2642)
- NODE-1487: remove deprecated Collection.group helper (#2609)
- NODE-2816: remove deprecated find options (#2571)
- NODE-2320: remove deprecated GridFS API (#2290)
- NODE-2713: remove parallelCollectionScan helper (#2449) (9dee21f)
- NODE-2324: remove Cursor#transformStream (#2574) (a54be7a)
- NODE-2318: remove legacy topology types (6aa2434)
- NODE-2560: remove reIndex (#2370) (6b510a6)
- NODE-2736: remove the collection save method (#2477) (d5bb496)
- NODE-1722: remove top-level write concern options (#2642) (6914e87)
- NODE-2506: remove createCollection strict mode (#2506) (bb13764)
- NODE-2562: remove geoHaystackSearch (#2315) (5a1b61c)
- NODE-3427: remove md5 hashing from GridFS API (#2899) (a488d88)
- NODE-2317: remove deprecated items (#2740) (listed below)
- `Collection.prototype.find` / `findOne` options: `fields` - use `projection` instead
- `Collection.prototype.save` - use `insertOne` instead
- `Collection.prototype.dropAllIndexes`
- `Collection.prototype.ensureIndex`
- `Collection.prototype.findAndModify` - use `findOneAndUpdate` / `findOneAndReplace` instead
- `Collection.prototype.findAndRemove` - use `findOneAndDelete` instead
- `Collection.prototype.parallelCollectionScan`
- `MongoError.create`
- `Topology.destroy`
- `Cursor.prototype.each` - use `forEach` instead
- `Db.prototype.eval`
- `Db.prototype.ensureIndex`
- `Db.prototype.profilingInfo`
- `MongoClient.prototype.logout`
- `MongoClient.prototype.addUser` - creating a user without roles
- `MongoClient.prototype.connect`
- Remove `MongoClient.isConnected` - calling connect is a no-op if already connected
- Remove `MongoClient.logOut`
- `require('mongodb').instrument`
  - Use command monitoring: `client.on('commandStarted', (ev) => {})`
- Top-level export no longer a function: `typeof require('mongodb') !== 'function'`
  - Must construct a MongoClient and call `.connect()` on it (see the sketch after this list).
- Removed `Symbol` export, now `BSONSymbol` which is a deprecated BSON type
  - Existing BSON symbols in your database will be deserialized to a BSONSymbol instance; however, users should use plain strings instead of BSONSymbol
- Removed `connect` export, use `MongoClient` construction
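For reference, a minimal sketch of the 4.x entry point (the URI is a placeholder):

```ts
const { MongoClient } = require('mongodb');

// The package no longer exports a callable function or a `connect` helper;
// construct a client and call connect() on it instead.
const client = new MongoClient('mongodb://localhost:27017');
await client.connect();
const db = client.db('test');
```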
And that's a wrap, thanks for upgrading! You've been a great audience!