## Changelog

- Fixed `ecs_run_pipeline` to execute multi-threaded systems on the worker threads. Added the `run_pipeline_multithreaded` and `run_pipeline_multithreaded_tasks` tests.
- Fixed a bug where calling `ecs_set_pipeline` could cause `ecs_progress` to block forever (Report on Discord).

## TODO

- How `system_time_total` is calculated.

## Notes

The way `system_time_total` is measured for multi-threaded systems is not accurate: it only measures the work the main thread does, but a worker thread could have more or less work. With the changes in this branch it would be straightforward to instead measure the system time including thread synchronization, which would make `system_time_total` the wall time of the system.
## Details

### Multi-threaded `ecs_run_pipeline`

- Updated `ecs_run_pipeline` to call `flecs_workers_progress` and to more closely mirror `ecs_progress`, with support for task threads.
- Removed `ecs_worker_state_t`; an `ecs_pipeline_state_t*` is now passed via the `ecs_stage_t` instead, allowing it to be updated dynamically.
- Split the operation-execution part of `flecs_run_pipeline` into a new function, `flecs_run_pipeline_ops`.
- Updated `flecs_worker` to call `flecs_run_pipeline_ops`, so it only operates on a specific pipeline operation. This makes the synchronization with the main thread clearer.
- Updated `flecs_run_pipeline` to contain only the main-thread work: updating the pipeline, updating the pipeline operation, waking workers, and synchronizing workers. It also calls `flecs_run_pipeline_ops` to execute the pipeline operation for the main thread.
- Removed `flecs_worker_begin`, `flecs_worker_end`, `flecs_worker_sync`, `flecs_is_multithreaded` and `flecs_is_main_thread`, as they are no longer needed.
- Exposed `flecs_signal_workers` and `flecs_wait_for_sync` from `worker.c` to `pipeline.c` via `pipeline.h`.
- Removed the `ecs_set_threads` call from `ecs_set_pipeline`; the workers can now operate on any pipeline dynamically, so there is no need to restart them.

### Some info on how this changes worker synchronization
I ran both `master` and the branch for this PR with log level 3 enabled in a release build, to compare how the workers operate. In the following diff you can see a few functional differences:

- For `SysB` and `SysC`, and lower down for `SysF`, the worker threads are no longer signalled, since the operation is not marked as `multi_threaded`. This means we can avoid synchronizing with the worker threads when it isn't needed.
- Between `SysD` and `SysE` there are new `worker %d: run` log messages. This is because the workers now go back to their main loop after executing one operation, rather than synchronizing internally in `flecs_run_pipeline`.
- At the end of the pipeline (`info: | merge`), we no longer wake the worker threads an additional time. This is possible because the workers stay in their main loop while waiting for the next operation to execute, which effectively removes an unneeded sync point.

Notes about log collection:

- Updated `flecs_log_msg` to take a lock while writing, to make the logging more stable.
- Updated the log message in `ecs_run_intern` so it's clearer what it is when viewed next to the `worker %d: run` message.

I've attached the full logs in case someone wants to look at them.

master.txt
multithreaded-custom-piepline.txt
## Deadlock when using `ecs_run_pipeline` with worker threads enabled (Report on Discord)

Fixed a bug in `flecs_worker_begin` where running an empty pipeline would cause `ecs_progress` to block forever: `flecs_worker_begin` signalled the workers to start, but `flecs_run_pipeline` would then return immediately because there were no operations to perform, so the worker threads were never synchronized before exiting.

flecs/src/addons/pipeline/pipeline.c, lines 531 to 533 in d24a9e1
## Removed unneeded worker signal and sync at end of pipeline

While working on this I noticed that when a pipeline is run across the worker threads (e.g. via `ecs_progress`), then at the end of the pipeline, when no more operations exist, `flecs_worker_sync` would call `flecs_worker_begin`, which would signal the workers, only for the loop to exit and `flecs_worker_end` to be called immediately afterwards.