
Enh: trace in-driver execution time. Extended logging buffer #171

Merged
merged 2 commits into elastic:master from enh/ext_log_buff_api_timing on Aug 16, 2019

Conversation

@bpintea bpintea (Collaborator) commented Aug 15, 2019

This PR adds the possibility to account for the time the execution
spends within the driver. For that, it simply sums up the ticks
elapsed between entering and exiting any ODBC API call.

With it, one can now roughly estimate how much time the driver code
runs, as well as how long the driver waits on the REST calls (the
latter was already available).

This time accounting is done globally and possibly across multiple
threads. It's mostly useful with single-threaded use, though.
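
As an illustration only, the mechanism described above could be
sketched as below; the hook macro names and the atomic add primitive
are assumptions made for this sketch, while api_ticks and in_ticks
are the variables visible in the diff further down.

#include <time.h>      /* clock(), clock_t */
#include <windows.h>   /* LONG64, InterlockedExchangeAdd64() */

#ifndef thread_local
#define thread_local __declspec(thread) /* MSVC; C11 offers _Thread_local */
#endif

/* global sum of ticks spent inside the driver, across all threads */
volatile LONG64 api_ticks = 0;
/* per-thread stamp taken when entering an ODBC API call */
clock_t thread_local in_ticks = 0;

/* hypothetical hooks placed at the entry and exit of each API function */
#define API_TIMING_ENTER() (in_ticks = clock())
#define API_TIMING_EXIT()  \
	InterlockedExchangeAdd64(&api_ticks, (LONG64)(clock() - in_ticks))

/* usage sketch, wrapping the body of a (fictitious) API entry point */
static int some_odbc_api_call(void)
{
	int ret;
	API_TIMING_ENTER();
	ret = 0; /* ... the actual work of the call would go here ... */
	API_TIMING_EXIT();
	return ret;
}

The interlocked addition keeps the global counter consistent when
several threads run driver code concurrently, which matches the note
above that the accounting is done globally.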

The PR also adds the possibility to use a much larger "extended"
logging buffer. This is useful in troubleshooting cases where larger
logging messages are required (such as when an entire server REST
reply, a JSON object, needs to be logged).
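
Also purely as an illustration, the compile-time switch to a larger
buffer might look like the following sketch; the macro names and the
sizes are made up here, not taken from the driver.

#include <stdarg.h>
#include <stdio.h>

#ifdef WITH_EXTENDED_LOG_BUF            /* hypothetical switch, off by default */
#define LOG_BUF_SIZE (4 * 1024 * 1024)  /* room for an entire REST reply */
#else
#define LOG_BUF_SIZE (4 * 1024)         /* regular per-message buffer */
#endif

static char log_buf[LOG_BUF_SIZE];

/* format one log message; anything longer than the buffer is truncated */
void log_msg(const char *fmt, ...)
{
	va_list args;
	va_start(args, fmt);
	vsnprintf(log_buf, sizeof(log_buf), fmt, args);
	va_end(args);
	/* ... append log_buf to the log output ... */
}

Note that a real implementation would need per-thread buffers or
locking around log_buf; this sketch only shows the size switch.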

Both of these features are disabled by default (for all build types).

@edsavage edsavage (Collaborator) left a comment


LGTM

driver/odbc.c Outdated
@@ -24,20 +24,16 @@
RET_HDIAGS(hnd, SQL_STATE_HYC00); \
} while (0)

#ifdef WITH_OAPI_TIMING
volatile LONG64 api_ticks = 0;
clock_t thread_local in_ticks;
Collaborator

explicitly initialise in_ticks

Collaborator Author

thanks, updated.

explicitly init global var
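
Judging by the review note and this follow-up commit's subject, the
change is presumably just an explicit zero-initialisation, along the
lines of:

volatile LONG64 api_ticks = 0;
clock_t thread_local in_ticks = 0; /* now explicitly initialised */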
@bpintea bpintea merged commit 259b724 into elastic:master Aug 16, 2019
@bpintea bpintea deleted the enh/ext_log_buff_api_timing branch August 16, 2019 11:10
bpintea added a commit that referenced this pull request Aug 28, 2019
* Enh: trace in-driver execution time; ext log buff.


* addressing PR review note

explicitly init global var

(cherry picked from commit 259b724)