
Metrics instrumentation flask #1186

Merged

Conversation

TheAnshul756
Contributor

@TheAnshul756 TheAnshul756 commented Jul 7, 2022

Description

Fixes #1155

Type of change

Please delete options that are not relevant.

  • New feature (non-breaking change which adds functionality)

How Has This Been Tested?

  • tox -e test-instrumentation-flask

Does This PR Require a Core Repo Change?

  • No.

Checklist:

See contributing.md for the style guide, changelog guidelines, and more.

  • Followed the style guidelines of this project
  • Changelogs have been updated
  • Unit tests have been added

@TheAnshul756 TheAnshul756 requested a review from a team July 7, 2022 10:54
@linux-foundation-easycla

linux-foundation-easycla bot commented Jul 7, 2022

CLA Signed

The committers listed above are authorized under a signed CLA.

@ocelotl
Contributor

ocelotl commented Jul 7, 2022

@TheAnshul756 is this PR ready for review?

If yes, please remove the [WIP] from its name.
If not, please convert it to draft.

@TheAnshul756 TheAnshul756 marked this pull request as draft July 8, 2022 06:34
@TheAnshul756 TheAnshul756 marked this pull request as ready for review July 8, 2022 09:08
@TheAnshul756 TheAnshul756 changed the title from [WIP] Metrics instrumentation flask to Metrics instrumentation flask Jul 8, 2022
@srikanthccv
Member

A good part of this is the same as in the WSGI instrumentation. I am wondering if we can base all WSGI-compatible frameworks on it.

@TheAnshul756
Contributor Author

Yes, I think we can implement all the WSGI-based frameworks taking inspiration from the WSGI instrumentation, but I don't think we can have a single common function that does this job for all frameworks. Correct me if I misunderstood what you are saying.

@srikanthccv
Member

Many of the WSGI-compliant instrumentations (flask, falcon, pyramid, etc.) already take a dependency on opentelemetry-instrumentation-wsgi and use it.

@srikanthccv
Member

And when both the WSGI and the framework instrumentations are enabled, they produce identical metrics with very little change in the time series.

@TheAnshul756
Contributor Author

If we are only using a specific framework's instrumentation and not the WSGI instrumentation, then we have to implement metric instrumentation in that framework, because although the frameworks take a dependency on the WSGI package, they do not use it for metric or trace instrumentation. They have their own implementations.

@srikanthccv
Member

because although the frameworks take a dependency on the WSGI package, they do not use it for metric or trace instrumentation.

What do you mean? Why else do you think we take a dependency on the WSGI instrumentation? Because we use parts of it.

They have their own implementation.

Right, I am not saying they will not have any implementation of their own. But I don't see the reason for each of them maintaining its own HTTP server semantic attributes, parsing the request/response data from the WSGI environ, filtering based on values, and testing exactly the same thing, unless you add some new tests (which is not the case with this PR).
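The shared work described above can be sketched with the stdlib alone: every WSGI request carries the same environ dict, so HTTP server semantic attributes can be derived once in opentelemetry-instrumentation-wsgi rather than per framework. The attribute keys follow the HTTP semantic conventions; the helper itself is illustrative, not the package's actual API.

```python
# Illustrative sketch: deriving HTTP server semantic attributes from a
# WSGI environ dict, the kind of logic shared across all WSGI frameworks.
def collect_request_attributes(environ):
    attributes = {
        "http.method": environ.get("REQUEST_METHOD", ""),
        "http.scheme": environ.get("wsgi.url_scheme", ""),
        "http.host": environ.get("HTTP_HOST", ""),
        "http.target": environ.get("PATH_INFO", ""),
    }
    # Append the query string to the target when present.
    if environ.get("QUERY_STRING"):
        attributes["http.target"] += "?" + environ["QUERY_STRING"]
    return attributes
```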

@@ -290,7 +307,21 @@ def _teardown_request(exc):
# a way that doesn't run `before_request`, like when it is created
# with `app.test_request_context`.
return
start = flask.request.environ.get(_ENVIRON_STARTTIME_KEY)
duration = max(round((default_timer() - start) * 1000), 0)
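The hunk above computes the request duration in `_teardown_request` from a start time stashed in the WSGI environ by `before_request`. A minimal sketch of that flow, with `duration_histogram` as an assumed stand-in for the instrument the PR creates (the key's value string is also an assumption):

```python
from timeit import default_timer

# Assumed environ key; the real constant lives in the Flask instrumentation.
_ENVIRON_STARTTIME_KEY = "opentelemetry-flask.starttime_key"

def before_request(environ):
    # Stash the start time so teardown can compute the duration later.
    environ[_ENVIRON_STARTTIME_KEY] = default_timer()

def teardown_request(environ, duration_histogram, attributes):
    start = environ.get(_ENVIRON_STARTTIME_KEY)
    if start is None:
        # Request context was created without running before_request.
        return None
    # Duration in milliseconds, clamped at zero against clock adjustments.
    duration = max(round((default_timer() - start) * 1000), 0)
    duration_histogram.record(duration, attributes)
    return duration
```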
Contributor

Shouldn't all this logic be in _start_response?

Contributor Author

Yes, that is also a possible way. But we are starting the span in before_request and ending it in teardown_request, so I did the same for metrics too. Also, here it says that it is discouraged; I'm also not sure which to prefer. If doing it in start_response is more acceptable, then I can make the changes.

Member

Yes, please do. That comment is relevant only to the tracing API; they are referring to the update_name call on a span once it has started. It is discouraged because the sampling decision would have already been made based on the name, and similar edge cases.
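The suggested move can be sketched as a wrapper around the WSGI `start_response` callable, recording the duration when the response starts rather than in `teardown_request`. All names here (`wrap_start_response`, `duration_histogram`) are illustrative, not the PR's actual code.

```python
from timeit import default_timer

def wrap_start_response(start_response, start_time, duration_histogram, attributes):
    """Return a start_response wrapper that records the request duration
    (in milliseconds) at the moment the response begins."""
    def _start_response(status, response_headers, *args, **kwargs):
        duration = max(round((default_timer() - start_time) * 1000), 0)
        duration_histogram.record(duration, attributes)
        # Delegate to the original callable so the response proceeds normally.
        return start_response(status, response_headers, *args, **kwargs)
    return _start_response
```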

Contributor Author

done.

@@ -326,14 +326,30 @@ def collect_custom_response_headers_attributes(response_headers):
return attributes


def _parse_status_code(resp_status):
def parse_status_code(resp_status):
Member

I would still keep this and the other functions private. They are not intended to be user-accessible. If there is a real need, we can always change them from private to public, but not vice versa.

Member

@srikanthccv srikanthccv left a comment


Blocking to prevent an accidental merge until the tests are updated.

Member

@srikanthccv srikanthccv left a comment


Does uninstrument stop producing the measurements? Please add some tests for instrument/uninstrument and other HTTP verbs.

@TheAnshul756
Contributor Author

Does uninstrument stop producing the measurements? Please add some tests for instrument/uninstrument and other HTTP verbs.

@srikanthccv I've added a test for uninstrument.

What do you mean by other HTTP verbs?

@srikanthccv
Member

I meant something other than GET. Could be something like client.{post,delete}, etc.

@TheAnshul756
Contributor Author

I meant something other than GET. Could be something like client.{post,delete}, etc.

I've changed GET to POST in one of the tests.

I'm just curious why it was needed. I think the instrumentation logic is written independently of the request type, so if it works fine with GET it should work fine with POST too? Correct me if I'm wrong.

@srikanthccv
Member

What I really wanted is something like a POST request that sends form data, to see the duration results.
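The scenario being asked for can be sketched with the stdlib alone: a POST carrying form data through a WSGI app, as a stand-in for a Flask test-client call like `client.post("/echo", data={"name": "otel"})` (all names here are illustrative, not from the PR's test suite).

```python
from io import BytesIO
from urllib.parse import parse_qs

def app(environ, start_response):
    # Minimal WSGI app that echoes the "name" field from a form POST.
    size = int(environ.get("CONTENT_LENGTH") or 0)
    form = parse_qs(environ["wsgi.input"].read(size).decode())
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [form["name"][0].encode()]

def post_form(app, path, data):
    # Build a WSGI environ for a form POST and invoke the app directly.
    body = "&".join(f"{k}={v}" for k, v in data.items()).encode()
    environ = {
        "REQUEST_METHOD": "POST",
        "PATH_INFO": path,
        "CONTENT_LENGTH": str(len(body)),
        "CONTENT_TYPE": "application/x-www-form-urlencoded",
        "wsgi.input": BytesIO(body),
    }
    statuses = []
    result = app(environ, lambda status, headers: statuses.append(status))
    return statuses[0], b"".join(result)
```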

@lzchen
Contributor

lzchen commented Aug 2, 2022

Please update the metrics support for this instrumentation here

@srikanthccv
Member

Or add _supports_metrics = True in package.py and run tox -e generate.
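For reference, a sketch of what that flag looks like in an instrumentation's package.py; the flag name comes from the comment above, while the file contents shown are an illustrative assumption, not the PR's actual diff.

```python
# package.py (sketch) for opentelemetry-instrumentation-flask
_instruments = ("flask >= 1.0",)

# Marks the package as emitting metrics so that `tox -e generate`
# regenerates the instrumentation support table accordingly.
_supports_metrics = True
```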

@srikanthccv srikanthccv enabled auto-merge (squash) August 5, 2022 07:34
@srikanthccv srikanthccv merged commit ebe6d18 into open-telemetry:main Aug 5, 2022
shalevr added a commit to shalevr/opentelemetry-python-contrib that referenced this pull request Aug 8, 2022
…hub.com/shalevr/opentelemetry-python-contrib into feature/Metrics-instrumentation-urllib3

* 'feature/Metrics-instrumentation-urllib3' of https://github.com/shalevr/opentelemetry-python-contrib:
  Metric instrumentation asgi (open-telemetry#1197)
  Metrics instrumentation flask (open-telemetry#1186)
  Adding sqlalchemy native tags in sqlalchemy commenter (open-telemetry#1206)
shalevr added a commit to shalevr/opentelemetry-python-contrib that referenced this pull request Aug 30, 2022
* main:
  Codespell ci (open-telemetry#1237)
  aiohttp-client: Fix producing additional spans with each newly created ClientSession (open-telemetry#1246)
  Remove support for 3.6 (open-telemetry#853)
  Added the Licence and Manifest file
  Restore metrics in django (open-telemetry#1208)
  fix typo in example codes (open-telemetry#1240)
  boto3sqs: Make propagation compatible with other instrumentations and add 'messaging.url' span attribute (open-telemetry#1234)
  Release 1.12.0-0.33b0 (open-telemetry#1223)
  Fix Flask instrumentation doc link (open-telemetry#1216)
  Feature/metrics instrumentation urllib3 (open-telemetry#1198)
  Metric instrumentation asgi (open-telemetry#1197)
  Metrics instrumentation flask (open-telemetry#1186)
  Adding sqlalchemy native tags in sqlalchemy commenter (open-telemetry#1206)
  ci: fix docs workflow failure (open-telemetry#1211)
  Add psycopg2 native tags to sqlcommenter (open-telemetry#1203)
  SQLCommenter semicolon bug fix (open-telemetry#1200)
  Sync with sdk setup from setUpClass to setUp (open-telemetry#1193)

# Conflicts:
#	CHANGELOG.md
#	instrumentation/opentelemetry-instrumentation-tornado/src/opentelemetry/instrumentation/tornado/__init__.py
#	instrumentation/opentelemetry-instrumentation-tornado/src/opentelemetry/instrumentation/tornado/client.py