fix: another attempt at fixing the flaky BulkWriter test #1228
Conversation
Codecov Report
@@            Coverage Diff             @@
##           node10    #1228      +/-   ##
==========================================
- Coverage   98.59%   98.42%    -0.18%
==========================================
  Files          28       30        +2
  Lines       18301    18474      +173
  Branches     1403     1423       +20
==========================================
+ Hits        18044    18183      +139
- Misses        254      288       +34
  Partials        3        3
==========================================
Continue to review full report at Codecov.
dev/test/bulk-writer.ts
Outdated
// there are floating promises, not waiting here can cause a race
// condition where the first batch's request has not been made by the
// time the second request is sent.
await requestMadeDeferred.promise;
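For readers following along, requestMadeDeferred is presumably an instance of the test suite's Deferred utility, which exposes its resolve/reject callbacks so the mocked request handler can signal the test once the first batch's request has actually gone out. A minimal sketch of that shape (assumed here, not copied from this diff):

// Minimal Deferred sketch: a promise whose resolve/reject are exposed so
// other code (e.g. the mocked request handler) can settle it explicitly.
// Assumed shape; the real utility in the library may differ in detail.
class Deferred<T = void> {
  promise: Promise<T>;
  resolve!: (value: T) => void;
  reject!: (reason?: unknown) => void;

  constructor() {
    this.promise = new Promise<T>((resolve, reject) => {
      this.resolve = resolve;
      this.reject = reject;
    });
  }
}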
I am a little confused here... isn't the test meant to send off a lot of writes as fast as possible, and as such it would be beneficial not to wait?
It seems like what we should do is:
setTimeoutHandler((_, timeout) => {
if (!timeoutCalled) {
timeoutCalled = true;
// expect(requestCounter).to.equal(0); This is racy without your
// deferred promise, but not needed for the purpose of the test
expect(timeout).to.be.greaterThan(0);
// done(); Maybe remove this so we don't have any outstanding
// work when the test completes.
}
});
for (let i = 0; i < 600; i++) {
bulkWriter.set(firestore.doc('collectionId/doc' + i), {foo: 'bar'});
}
await bulkWriter.close();
expect(timeoutCalled).to.be.true;
Post conversation:
The original issue with this simplified test was that it still passes even if the timeout handler is called for the first batch. To fix this, the BulkWriter tests now check that the timeout handler wasn't called after each individual test.
I tried removing the done(), but since it's based on the actual clock, the timeout handler will be called multiple times. If we just call the timeout function directly in the overridden handler, we will loop until enough time passes for the rate limiter to allow the request. Though there could still be outstanding work once the test completes, it shouldn't affect other tests since each test creates a new BulkWriter instance.
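A hedged sketch of what such a per-test check could look like; the counter name and exact wiring are illustrative, not the identifiers used in the PR:

// Illustrative only: count how often the overridden timeout handler fires
// and assert on it after every test, so a test that silently triggers a
// retry/backoff no longer passes by accident. Tests that expect a backoff
// would adjust the expected count instead of asserting zero.
let timeoutHandlerCounter = 0;

beforeEach(() => {
  timeoutHandlerCounter = 0;
  setTimeoutHandler((fn, timeout) => {
    timeoutHandlerCounter++;
    setTimeout(fn, timeout);
  });
});

afterEach(() => {
  expect(timeoutHandlerCounter).to.equal(0);
  setTimeoutHandler(setTimeout); // restore the real timer
});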
Force-pushed from 489260d to 4b5251d.
Force-pushed from a3629ae to b46e184.
I also removed the response generation logic since the rate limiter test isn't testing the request/response logic. This also allows the test to pass independently of what the maximum batch size is.
const responses = mergeResponses(
  Array.from(new Array(requestLength), (_, i) => successResponse(i))
);
return response({
This is a pretty great simplification.
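For readers without the test file open, the helpers referenced above plausibly have shapes along these lines (assumed for illustration, not copied from the PR):

// Assumed helper shapes: one successful write result per element, merged
// into a single BatchWrite-style response so the mock scales with whatever
// batch size BulkWriter chooses.
interface MockResponse {
  writeResults: Array<{updateTime: {seconds: number; nanos: number}}>;
  status: Array<{code: number}>;
}

function successResponse(seconds: number): MockResponse {
  return {
    writeResults: [{updateTime: {seconds, nanos: 0}}],
    status: [{code: 0 /* OK */}],
  };
}

function mergeResponses(responses: MockResponse[]): MockResponse {
  return {
    writeResults: responses.map(r => r.writeResults[0]),
    status: responses.map(r => r.status[0]),
  };
}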
dev/test/bulk-writer.ts
Outdated
it.skip('does not send batches if doing so exceeds the rate limit', async () => {
  // The test is considered a success if BulkWriter tries to send the second
  // batch again after a timeout.
  afterEach(() => setTimeoutHandler(setTimeout));
I think we probably don't need this anymore since this is called above.
removed.
dev/test/bulk-writer.ts
Outdated
}
await bulkWriter.close();
if (!timeoutCalled) {
  done(new Error('Expected the timeout handler to be called'));
It's actually an error to invoke done() twice. This should show up in the console.
Either the done() in the timeout handler will be called, setting timeoutCalled to true, or the error done() will be called. Calling done() twice will cause the test to fail.
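A stripped-down sketch of that pattern, with bulkWriter, firestore, setTimeoutHandler, and expect assumed to be in scope as in the surrounding test file:

// If the overridden timeout handler fires, it completes the test via done();
// otherwise the fallback after close() fails the test explicitly. The
// timeoutCalled flag keeps done() from running twice, since Mocha treats a
// second call to done() as an error.
it('uses timeout for batches that exceed the rate limit', done => {
  let timeoutCalled = false;
  setTimeoutHandler((_, timeout) => {
    if (!timeoutCalled) {
      timeoutCalled = true;
      expect(timeout).to.be.greaterThan(0);
      done(); // first (and only intended) completion path
    }
  });
  for (let i = 0; i < 600; i++) {
    bulkWriter.set(firestore.doc('collectionId/doc' + i), {foo: 'bar'});
  }
  bulkWriter.close().then(() => {
    if (!timeoutCalled) {
      // Fallback path: fails the test if no backoff was ever scheduled.
      done(new Error('Expected the timeout handler to be called'));
    }
  });
});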
dev/test/bulk-writer.ts
Outdated
for (let i = 0; i < 600; i++) {
  bulkWriter.set(firestore.doc('collectionId/doc' + i), {foo: 'bar'});
}
await bulkWriter.close();
If you don't wait for the close to finish (i.e. no dangling promise in this test), then verifyInstance (line 189) might become flaky, since it verifies that at the end of each test all operations have finished executing.
Talked separately: The 2nd request will never fire since we are not invoking the callback function in the overridden handler. This means that the handler will be triggered twice: once when the 2nd request is made, and once when the first request is returned.
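A hedged sketch of the per-test verification being referred to; the real verifyInstance helper lives in the test utilities and its exact signature is assumed here:

// Illustrative: each test tears down by confirming that the mocked Firestore
// client has no pending operations and by restoring the real timer, so a
// dangling bulkWriter.close() promise in one test would surface here.
afterEach(async () => {
  await verifyInstance(firestore); // fails if any operation is still pending
  setTimeoutHandler(setTimeout);   // restore the real setTimeout-based backoff
});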
const arrayRange2 = [500, 501, 502, 503, 504];
const requests2 = arrayRange2.map(i => setOp('doc' + i, 'bar'));
const responses2 = arrayRange2.map(i => successResponse(i));
it('does not send batches if doing so exceeds the rate limit', done => {
Final nit: This test should get a more obvious name, for example "Uses timeout for batches that exceed the rate limit".
* fix!: mark v1beta1 client as deprecated (#937)
* feat!: use QueryDocumentSnapshot in FirestoreDataConverter (#965)
* deps: update to gts 2.x (#1013)
* chore!: update settings for Node 10 (#1019)
* deps: drop through2 (#1014)
* feat: support BigInt (#1016)
* fix: make update.sh work on Linux (#1043)
* fix: only use BigInt in BigInt system test (#1044)
* fix: make pbjs compile admin proto again (#1045)
* Add BulkWriter (#1055)
* docs: Add documentation for FirestoreDataConverter (#1059)
* chore: enforce return types (#1065)
* fix: add generic to Firestore.getAll() (#1066)
* chore: remove internal WriteOp (#1067)
* chore: add linter checks for it|describe.only (#1068)
* fix: handle terminate in BulkWriter (#1070)
* chore: run template copying last in synthtool (#1071)
* feat: Firestore Bundles implementation (#1078)
* feat: add support for set() with SetOptions when using FirestoreDataConverter (#1087)
* feat: Add totalDocuments and totalBytes to bundle metadata. (#1085)
* feat: Add totalDocuments and totalBytes to bundle metadata.
* fix: Better comment
* fix: Better testing.
* fix: Improve metadata testing.
* fix: incomplete expect in rate-limiter test (#1092)
* Remove BatchWrite proto, fix conformance tests
* chore: use public API types internally (#1100)
* feat: add Partition and BatchWrite protos (#1110)
* fix: remove GCF transaction fallback (#1112)
* fix: add BulkWriter integration tests, java backport changes, delete fix (#1117)
* chore: merge master (#1218)
* chore: add eslint check for console.log statements (#1229)
* fix: another attempt at fixing the flaky BulkWriter test (#1228)
* Fix comment
* Renames
* Test fix
* Fix unit tests

Co-authored-by: Brian Chen <chenbrian@google.com>
Co-authored-by: wu-hui <53845758+wu-hui@users.noreply.github.com>