Feature: add redshift extension #301
Conversation
this.logger.debug(
  `Errors occurred, release connection from ${profileName}`
);
redshiftClient.destroy();
Does the .destroy call here close the connection? We might reuse these clients, so we should not close connections when an error occurs.
I've deleted the .destroy call.
// definition of describeStatementResponse.Status
// https://github.com/aws/aws-sdk-js-v3/blob/29056f4ca545f7e5cf951b915bb52178305fc305/clients/client-redshift-data/src/models/models_0.ts#L604
while (!describeStatementResponse || describeStatementResponse.Status !== 'FINISHED') {
Maybe we can add some exponential backoff while polling the status.
I added this package to enable exponential backoff.
Thanks! I meant we could pause between requests to avoid exceeding the AWS API rate limit. But I think your approach is okay; we'll retry automatically when the API returns an error.
while (!describeStatementResponse || describeStatementResponse.Status !== 'FINISHED') {
  const describeStatementCommand = new DescribeStatementCommand(describeStatementRequestInput);
  describeStatementResponse = await redshiftClient.send(describeStatementCommand);
  if (
    describeStatementResponse.Status === 'ABORTED' ||
    describeStatementResponse.Status === 'FAILED'
  ) {
    throw describeStatementResponse.Error;
  }
  // Sleep here before the next iteration
}
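The pause suggested above could be sketched roughly like this. This is a minimal illustration, not the PR's actual implementation: sleep, backoffDelay, waitUntilFinished, and the injected poll callback (standing in for redshiftClient.send(new DescribeStatementCommand(...))) are all hypothetical names, and the base/cap delays are arbitrary.

```typescript
// Hedged sketch: poll a statement's status with exponential backoff.
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Delay doubles on each attempt and is capped so we never wait too long.
function backoffDelay(attempt: number, baseMs = 100, maxDelayMs = 5000): number {
  return Math.min(baseMs * 2 ** attempt, maxDelayMs);
}

async function waitUntilFinished(
  poll: () => Promise<{ Status?: string; Error?: string }>
): Promise<void> {
  for (let attempt = 0; ; attempt++) {
    const res = await poll();
    if (res.Status === 'FINISHED') return;
    if (res.Status === 'ABORTED' || res.Status === 'FAILED') {
      throw new Error(res.Error ?? 'statement failed');
    }
    // Back off before polling again to stay under the API rate limit.
    await sleep(backoffDelay(attempt));
  }
}
```

Injecting the poll function keeps the backoff logic independent of the AWS client, which also makes it easy to test with a fake poller.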
  "Id": describeStatementResponse.Id
};
const getStatementResultCommand = new GetStatementResultCommand(getStatementResultCommandParams);
const getStatementResultResponse = await redshiftClient.send(getStatementResultCommand);
It looks like the result might be returned in batches, so we ought to handle the NextToken of the response. Storing the token and then fetching the next chunk of data when we get a request from downstream might be a good idea:
new Readable({
  ...
  read() {
    if (nextToken) {
      // GetStatementResultCommand with nextToken
    }
  }
})
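The pseudocode above could be filled out along these lines. This is a hedged sketch of the reviewer's suggestion, not the PR's code: Page, resultStream, and the fetchPage callback (a stand-in for redshiftClient.send(new GetStatementResultCommand({ Id, NextToken }))) are illustrative names.

```typescript
import { Readable } from 'stream';

// Shape of one page of results, modeled loosely on GetStatementResult output.
type Page = { Records: unknown[]; NextToken?: string };

// A Readable that lazily fetches the next page only when downstream asks.
function resultStream(fetchPage: (token?: string) => Promise<Page>): Readable {
  let nextToken: string | undefined;
  return new Readable({
    objectMode: true,
    async read() {
      // Keep fetching until we push at least one record or reach the end,
      // so the stream never stalls on an empty intermediate page.
      while (true) {
        const page = await fetchPage(nextToken);
        nextToken = page.NextToken;
        for (const record of page.Records) this.push(record);
        if (nextToken === undefined) {
          this.push(null); // no more pages: end the stream
          return;
        }
        if (page.Records.length > 0) return;
      }
    },
  });
}
```

Because read() is only invoked when the consumer wants more data, pagination naturally follows the stream's demand, which is also the hook where backpressure handling would slot in later.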
I've handled the NextToken condition at line 155.
Thanks, we won't lose data this way.
If we encounter performance issues in the future, we could consider taking backpressure in Node.js streams into account~
I have no further issues, many thanks!
Thanks @oscar60310 for reviewing! ❤️
Thanks for fixing the test case issue, LGTM 👍
Description
Basically, the PR content is the same as the closed one, #297. However, I updated it based on code-review feedback from @kokokuo.
Issue ticket number
None
Additional Context