Use execute_query for introspection #155
Conversation
```rust
// Let's do some stored procedures introspection
let mut stored_procs_query = SQL::new();
RawSql::RawText(STORED_PROCS_CONFIGURATION_QUERY.to_string()).to_sql(&mut stored_procs_query);
let mut stored_procs_rows = Vec::new();
```
I don't feel like I understand what's going on here with this Vec and `execute_query`. From my understanding there is only one row returned by the introspection query... but the way this is implemented, each row's string is just blindly concatenated into the same array of chars here (`stored_procs_row` is `Vec<u8>`). That seems... dubious.
My quick reading of the tiberius API we're using is that we should be taking a `QueryStream` and calling `into_first_result`, which gives a `Vec` of `Row`. Then we should extract our JSON from the one column in the one row we find there, and error if we get more than one row, because why are our JSON queries returning multiple rows...
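To spell out the shape being suggested, here is a small sketch. The `MockRow` struct is a hypothetical stand-in for tiberius's `Row` so the example is self-contained; the names and error strings are illustrative, not the real API:

```rust
// Hypothetical stand-in for a tiberius Row holding a single JSON column.
struct MockRow {
    json: Option<String>,
}

// Sketch of the suggested shape: take the collected rows (as
// into_first_result would give us), insist on exactly one, and pull the
// JSON string out of its single column.
fn extract_single_json(rows: Vec<MockRow>) -> Result<String, String> {
    match rows.len() {
        1 => rows
            .into_iter()
            .next()
            .and_then(|row| row.json)
            .ok_or_else(|| "introspection query returned a NULL JSON column".to_string()),
        n => Err(format!(
            "expected exactly one row from introspection query, got {n}"
        )),
    }
}
```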
The fact that our logic reads multiple rows and munges all data from the first column in each row into a single string is scary. I can't find anything in my quick check of the SQL Server docs or from AI queries that indicates a JSON string would be split across multiple rows, which is what the `execute_query` code seems to be saying will happen.
Edit: Is it because of this? https://dba.stackexchange.com/a/279614 If so, let's document that inside `execute_query` and rename our variables to not refer to "rows", since there's really only one, and the butchered "rows" sent over the wire are just some crazy XML streaming stuff :/
Also, maybe let's refactor `execute_query` to just return the string rather than having a mutable buffer passed in... I can't see the reason for wanting that sort of mutable API for this rather than a straight functional one. We can do that separately, of course.
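A functional shape for that refactor might look something like the sketch below. The tiberius streaming is replaced by a plain iterator of optional string chunks so the example runs on its own; the function name and signature are assumptions, not the actual refactor:

```rust
// Per https://dba.stackexchange.com/a/279614, SQL Server splits large
// FOR JSON output into ~2033-character chunks, each surfaced as a "row"
// with a single column. A functional execute_query would stitch those
// chunks back together and return one owned String, rather than
// appending into a caller-supplied mutable buffer.
fn collect_json_chunks<I>(chunks: I) -> String
where
    I: IntoIterator<Item = Option<String>>,
{
    // flatten() skips any NULL chunks; collect() concatenates the rest.
    chunks.into_iter().flatten().collect()
}
```

Returning the `String` directly keeps the API straight-line functional and makes the "there is really only one logical result" invariant easier to see at the call site.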
I'll create a ticket to do this in a follow up. Good call out.
This is a big improvement, let's merge this and do the other identified improvements in a follow-up. You'll need to run `just format` to get rid of the `cargo fmt` errors.
### What

Fix a problem in #155 where pulling `query-engine-translation` into the configuration crate was pulling `openssl` in. Turns out we're still pulling `ndc-models` from the `ndc-sdk` here (probably from before `ndc-models` became its own crate).

### How

Use `ndc-models` directly instead of `ndc-sdk`.
Force-pushed from `02c5e35` to `2bbab40`.
### What

The configuration introspection was using `select_first_row`, which was not properly streaming all results and had a limit that caused introspection to fail on larger sets of tables.

### How

This removes `select_first_row` and uses `execute_query` to fetch all introspection data.