
Fix potential race conditions and crashes during shutdown #2525

Closed
wants to merge 2 commits

Conversation

uklotzde
Contributor

Lessons learned from PR #2508: Concurrent guessing of cover infos may not have finished before shutting down. Those pending tasks may cause crashes when trying to access resources that have already disappeared.
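
For illustration only, here is a minimal sketch of the shutdown ordering this PR argues for, assuming the fix boils down to draining the global QThreadPool before shared resources are destroyed. `QThreadPool::globalInstance()` and `waitForDone()` are standard Qt calls; `stopLibraryScanner()` and `destroyCachesAndCollections()` are hypothetical placeholders for the real Mixxx teardown steps.

```cpp
#include <QThreadPool>

// Hypothetical placeholders standing in for the real Mixxx teardown steps.
void stopLibraryScanner() { /* stop producing new background tasks */ }
void destroyCachesAndCollections() { /* delete CoverArtCache, track cache, ... */ }

// Illustrative shutdown order (not the actual PR diff):
void shutdownSketch() {
    stopLibraryScanner();                         // 1) no new tasks after this point
    QThreadPool::globalInstance()->waitForDone(); // 2) block until all scheduled tasks have finished
    destroyCachesAndCollections();                // 3) now safe: no task can still touch them
}
```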

Member

@Holzhaus Holzhaus left a comment


LGTM. Maybe one other dev should have a look at this before we merge, just to make sure.

@daschuer
Member

daschuer commented Mar 1, 2020

I don't think it is a good idea to do this and force an exit here on the global thread pool instance.

The creator of the task should be responsible for removing unscheduled tasks on exit, not the TrackCollectionManager.

We may move this to the end of main() as a last resort and assert that the queue is empty.
This will uncover programming errors.

If we expect stalled tasks, this is already handled by the default expiry timeout of 30 s.

@uklotzde
Contributor Author

uklotzde commented Mar 1, 2020

If we don't wait for the completion of tasks before destroying the track cache, those tasks may cause crashes while holding track pointers or accessing destroyed singletons like the CoverArtCache. Please explain why such a behavior could be desired.

We cannot guarantee that all tasks have finished when exiting Mixxx, even if they only run for a fraction of a second.
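
To make the failure mode concrete, here is a contrived, self-contained example of this kind of race (the `Cache` type and the sleep are made up for illustration; this is not Mixxx code): a QtConcurrent task still holds a pointer to a shared object that the main thread destroys during shutdown.

```cpp
#include <QCoreApplication>
#include <QThread>
#include <QtConcurrent>

// Contrived stand-in for a shared singleton such as CoverArtCache.
struct Cache {
    void store(int value) { m_last = value; }
    int m_last = 0;
};

int main(int argc, char** argv) {
    QCoreApplication app(argc, argv);
    auto* cache = new Cache();

    // Background task that may still be running when shutdown starts.
    QFuture<void> future = QtConcurrent::run([cache] {
        QThread::msleep(100); // simulate ongoing work (e.g. guessing cover info)
        cache->store(42);     // use-after-free if the cache is already gone
    });

    delete cache;             // "shutdown" destroys the shared resource too early
    future.waitForFinished(); // too late: the task already raced with the deletion
    return 0;
}
```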

@uklotzde
Contributor Author

uklotzde commented Mar 1, 2020

The global thread pool might be used by anyone. It is not possible to cancel selected tasks once they have been scheduled.

@daschuer
Member

daschuer commented Mar 1, 2020

Please explain why such a behavior could be desired.

Of course this behavior is not desired. It's just the wrong solution.

Maybe we need a separate counter somehow. I think in such a multithreaded environment the issue is not only related to the thread pool.

The global thread pool might be used by anyone.

That is one of the main issues with this solution.

@uklotzde
Contributor Author

uklotzde commented Mar 1, 2020

Sorry, but I don't understand those arguments. I struggled hard with exactly this issue, which caused spurious test failures and may also appear on application shutdown. If you are using QtConcurrent, you need to ensure that shared resources like singletons don't disappear while tasks are running. Usually they don't, but on shutdown they do.

@uklotzde
Contributor Author

uklotzde commented Mar 1, 2020

I won't write a custom concurrent task framework to solve an edge case.

@uklotzde
Contributor Author

uklotzde commented Mar 2, 2020

I understand that TrackCollectionManager is probably not the right place. But that is due to the fact that the library scanner is hosted there (which, as I already pointed out, is unfortunate for other reasons). We first have to stop the library scanner, otherwise new tasks may appear.

@daschuer
Member

daschuer commented Mar 2, 2020

We cannot guarantee that all threads using the track collection are managed by the thread pool. This implicit requirement might be surprising in future changes.
Just suspending the scheduled tasks here may also create inconsistencies in the user's code.

I think the best way forward would be to introduce proper resource handling with a usage counter or locking (see the sketch after this comment).

I'd like to have a deeper look into this. Can you point me to the root cause of the crash?
Which line is crashing? Do you have a backtrace?
Can we force a crash by scheduling a new task just before destructing the track collection manager?
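
To show what a usage-counter approach could look like, here is a minimal sketch (my illustration of the idea, not an actual proposal from this thread): tasks increment the counter while they use the shared resource, and the owner blocks on shutdown until the count drops back to zero.

```cpp
#include <condition_variable>
#include <mutex>

// Sketch of a usage counter guarding a shared resource during shutdown.
class UsageCounter {
  public:
    void acquire() {
        std::lock_guard<std::mutex> lock(m_mutex);
        ++m_count;
    }
    void release() {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (--m_count == 0) {
            m_idle.notify_all();
        }
    }
    // Called by the owner before destroying the resource:
    // blocks until no task is using it anymore.
    void waitUntilUnused() {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_idle.wait(lock, [this] { return m_count == 0; });
    }

  private:
    std::mutex m_mutex;
    std::condition_variable m_idle;
    int m_count = 0;
};
```

Each task would wrap acquire()/release() in an RAII guard around its access to the track collection, and the owner would call waitUntilUnused() before tearing it down. A plain counter still does not prevent new acquisitions after shutdown has started, so some locking or an "is shutting down" flag would be needed on top, which matches the concern about threads outside the pool.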

@uklotzde
Contributor Author

uklotzde commented Mar 2, 2020

No, I don't have anything at hand.

Closing this. Just keep in mind that we still have issues deeply hidden in the code base.

@uklotzde uklotzde closed this Mar 2, 2020
@uklotzde uklotzde deleted the shutdown_threadpool branch March 2, 2020 07:23