
Fix/prometheusexporter Shutdown HTTP server #35465

Conversation

Argannor
Contributor

@Argannor Argannor commented Sep 27, 2024

Description: Shut down the http.Server instance on exporter shutdown

Link to tracking Issue: #35464

Testing: Manual testing. I included this version of the exporter in our internal distribution and deployed it to verify the metrics no longer go stale after receiving a NOHUP signal.

Documentation: n/a
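
For context, here is a minimal sketch of the idea behind the change (not the actual prometheusexporter code; the type and field names below are made up): the exporter keeps a reference to the http.Server it starts and calls the server's Shutdown method when the exporter itself is shut down, so the listener and idle keep-alive connections are actually released.

	// Minimal sketch, assuming a hypothetical exporter type that owns its
	// *http.Server; the real component wires this through the collector's
	// Start/Shutdown lifecycle.
	package exporter

	import (
		"context"
		"net/http"
	)

	type promExporter struct {
		server *http.Server
	}

	func (e *promExporter) Start(_ context.Context) error {
		// Stand-in handler; the real exporter serves the Prometheus registry.
		e.server = &http.Server{Addr: ":8889", Handler: http.NotFoundHandler()}
		go func() {
			// ListenAndServe returns http.ErrServerClosed once Shutdown is called.
			_ = e.server.ListenAndServe()
		}()
		return nil
	}

	func (e *promExporter) Shutdown(ctx context.Context) error {
		if e.server == nil {
			return nil
		}
		// Shutdown closes the listener and idle (keep-alive) connections and
		// waits for in-flight requests to finish, bounded by ctx.
		return e.server.Shutdown(ctx)
	}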

@Argannor Argannor requested review from dashpole and a team as code owners September 27, 2024 14:57

linux-foundation-easycla bot commented Sep 27, 2024

CLA Signed

The committers listed above are authorized under a signed CLA.

@Argannor Argannor force-pushed the fix/prometheusexporter-shutdown-server branch from 00e689f to 52eee00 on September 27, 2024 15:03
@Argannor
Contributor Author

Hey @Aneurysm9, @songy23, sorry for pinging you directly. Do you mind having a look at this or advising me what is needed from me to go ahead with this PR?

@Argannor Argannor force-pushed the fix/prometheusexporter-shutdown-server branch from 1c1ef8f to 6efa063 on October 11, 2024 13:59
@dashpole
Contributor

unit tests seem to fail after the change.

@Argannor
Contributor Author

Yeah, I just took a look at the tests that are failing, and I'm not sure what's happening there:

	t.Cleanup(func() {
		require.NoError(t, exp.Shutdown(context.Background()))
		// trigger a get so that the server cleans up our keepalive socket
		var resp *http.Response
		resp, err = httpClient.Get("https://localhost:7777/metrics")
		require.NoError(t, err)
		require.NoError(t, resp.Body.Close())
	})

Is this a workaround for the fact that the server previously did not shut down correctly, and is it now failing because after Shutdown the server really is shut down?

@dashpole
Contributor

It might have been. Try removing the httpClient.Get call from cleanup and see if tests pass.
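
For illustration only (this test is not part of the PR), a small hypothetical example of why that assertion would start failing: once Shutdown actually closes the http.Server, the follow-up GET gets a dial error instead of a response, so require.NoError(t, err) trips.

	package exporter_test // hypothetical standalone example

	import (
		"net/http"
		"net/http/httptest"
		"testing"

		"github.com/stretchr/testify/require"
	)

	func TestGetAfterShutdown(t *testing.T) {
		srv := httptest.NewServer(http.NotFoundHandler())
		url := srv.URL
		// Stand-in for the exporter's Shutdown now really closing its http.Server.
		srv.Close()

		_, err := http.Get(url)
		require.Error(t, err) // the old cleanup asserted require.NoError(t, err) here
	}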

These statements seem to be a workaround for the server not being closed correctly; with the previous changes this should no longer be necessary.
@Argannor Argannor force-pushed the fix/prometheusexporter-shutdown-server branch from 381f74f to 7b7535f on October 14, 2024 09:26
@Argannor
Contributor Author

After removing the statements, the tests passed locally for me. Sorry for not running them in the first place; I had some trouble running them using make... :/
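
For reference, the trimmed cleanup would then be roughly just the Shutdown assertion (a sketch of the quoted fragment with the workaround lines removed, not necessarily the exact final test code):

	t.Cleanup(func() {
		require.NoError(t, exp.Shutdown(context.Background()))
	})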

@Argannor
Contributor Author

Is there something you need me to do?

@dashpole dashpole added the "ready to merge" label (Code review completed; ready to merge by maintainers) on Oct 24, 2024
@andrzej-stencel andrzej-stencel merged commit a30db62 into open-telemetry:main Oct 30, 2024
168 checks passed
@github-actions github-actions bot added this to the next release milestone Oct 30, 2024
zzhlogin pushed a commit to zzhlogin/opentelemetry-collector-contrib-aws that referenced this pull request Nov 12, 2024
sbylica-splunk pushed a commit to sbylica-splunk/opentelemetry-collector-contrib that referenced this pull request Dec 17, 2024
Labels
exporter/prometheus, ready to merge (Code review completed; ready to merge by maintainers)
5 participants