
tests: increase timeout for unit tests on focal #14876

Open · wants to merge 3 commits into master

Conversation

maykathm (Contributor)

No description provided.

@maykathm maykathm marked this pull request as ready for review December 18, 2024 16:28
olivercalder (Member) left a comment

Thanks for addressing this! I'm a bit confused about the relationship between settle timeout and overall test timeout.

@@ -101,7 +101,7 @@ func (s *snapmgrBaseTest) settle(c *C) {
 	s.state.Unlock()
 	defer s.state.Lock()
 
-	err := s.o.Settle(testutil.HostScaledTimeout(5 * time.Second))
+	err := s.o.Settle(testutil.HostScaledTimeout(15 * time.Second))
olivercalder (Member):

This change seems unrelated to the TIMEOUT changes below, as this is 5->15 seconds for settle to converge, while below is 5->15 minutes for overall test run. Does the overall test runtime scale with the settle runtime? Or does HostScaledTimeout scale with the TIMEOUT environment variable?

I think this is what's confusing me: if total test runtime scales with settle timeout, and settle timeout is being increased universally, shouldn't total test runtime also be increased universally? And if not, why increase the total test runtime for just focal?

maykathm (Contributor, Author):

Excellent questions. Their identical numbers were actually a coincidence. I was experimenting with how much time focal (this only fails on the google focal machine) needed to pass its tests. When I increased the settle timeout, the tests no longer died from failing to converge but instead hit the overall unit test timeout. So I kept increasing each of those two values by five until the tests passed. It looks like I overshot the settle timeout, though; I just dialed it back by 5 seconds and focal is still happy.

maykathm (Contributor, Author):

I wasn't sure whether I should increase the overall timeout for everybody, since the failure only occurs on focal. I am generally very much in favor of removing ifs, so if you think it would be better, I am happy to remove that condition.

olivercalder (Member):

Hmm, I'm torn about increasing the unit test timeout universally. On the one hand, unit tests ought to take the same amount of time on all systems (otherwise they're not very "unit"; also, why is focal taking 3x as long?). On the other hand, if there's a deadlock in a unit test, the runner will sit spinning for as long as the timeout lets it, so it could just be burning money. Deadlocks shouldn't be happening, though, and local testing before pushing would catch them, so I think I'm in favor of universally increasing the timeouts, especially since this is for running unit tests inside spread, where runners may be glacial.

maykathm (Contributor, Author):

I think the issue has to do with the machine itself (perhaps network connectivity, or an overloaded CPU?). I tried various focal VMs (both lxd and qemu) and had no issues whatsoever; the unit tests completed in roughly the same amount of time as on my native system.

It would be really nice not to mix the particularities of our CI environment with the codified correct behavior of snapd. That muddies the water and makes it hard to understand why tests do what they do. It would be awesome to extract these kinds of things into configuration files, or something similar, specific to each backend.


codecov bot commented Dec 19, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 78.30%. Comparing base (24a0034) to head (37626be).
Report is 73 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master   #14876      +/-   ##
==========================================
+ Coverage   78.20%   78.30%   +0.10%     
==========================================
  Files        1151     1153       +2     
  Lines      151396   152532    +1136     
==========================================
+ Hits       118402   119444    +1042     
- Misses      25662    25721      +59     
- Partials     7332     7367      +35     
Flag Coverage Δ
unittests 78.30% <ø> (+0.10%) ⬆️

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

olivercalder (Member) left a comment

Looks great, thank you!
