The collision detection process often produces duplicate values; this is expected because each object can map to up to four (2D) or eight (3D) distinct indices, each of which must be tested independently. When two objects collide and at least one occupies multiple indices, the same potential collision may be emitted more than once.
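To make the mechanism concrete, here is a minimal sketch of how the duplicates arise; the function name and ID/index types are assumptions for illustration, not this crate's actual API:

```rust
use std::collections::HashMap;

// Illustrative sketch only. Each object registers under every cell index it
// occupies (up to four in 2D, up to eight in 3D), and pairs are generated
// per cell.
fn candidate_pairs(objects: &[(u32, Vec<u64>)]) -> Vec<(u32, u32)> {
    // cell index -> ids of objects occupying that cell
    let mut cells: HashMap<u64, Vec<u32>> = HashMap::new();
    for (id, indices) in objects {
        for &cell in indices {
            cells.entry(cell).or_default().push(*id);
        }
    }
    let mut pairs = Vec::new();
    for ids in cells.values() {
        for i in 0..ids.len() {
            for j in (i + 1)..ids.len() {
                // A pair of objects sharing several cells is emitted once
                // per shared cell, which is where the duplicates come from.
                pairs.push((ids[i], ids[j]));
            }
        }
    }
    pairs
}
```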
To avoid unexpected results from the user's perspective (re-running collision handlers unnecessarily), duplicates are removed from the results. This is currently implemented by inserting results into a `HashSet` before returning them from `detect_collisions`.
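A minimal sketch of that pass, assuming the pair type is `Hash + Eq` (the real `detect_collisions` signature may differ):

```rust
use std::collections::HashSet;

// Sketch of the current deduplication pass: collect candidate pairs into a
// HashSet, then hand back only the unique values. Names are illustrative.
fn dedup_pairs(pairs: Vec<(u32, u32)>) -> Vec<(u32, u32)> {
    let unique: HashSet<(u32, u32)> = pairs.into_iter().collect();
    unique.into_iter().collect()
}
```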
This process is the current limiting factor for performance, taking significantly more time than either index calculation or actual collision detection for the example application.
Alternatives already tested (sketched in code after this list):

- Using a `Vec`, calling `sort_unstable()` followed by `dedup()`: this is the slowest option.
- Using a `Vec`, calling rayon's `par_sort_unstable()` followed by `dedup()`: almost as fast as the `HashSet`, but requiring more total CPU time.
- Different hashers: I've tested the default `std` hasher, as well as hashers from `fnv`, `rustc_hash`, and `murmurhash64`. Of these, `rustc_hash` is the fastest (and the current default), followed by `fnv`, with `std` being the slowest.
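For reference, sketches of the tested variants; the `rayon` and `rustc_hash` calls are their published APIs, while the function names and pair type are illustrative:

```rust
use rayon::prelude::*;     // provides par_sort_unstable on slices
use rustc_hash::FxHashSet; // std HashSet backed by the faster FxHasher

// Variant 1: single-threaded sort + dedup (slowest in my tests).
fn dedup_sorted(mut pairs: Vec<(u32, u32)>) -> Vec<(u32, u32)> {
    pairs.sort_unstable();
    pairs.dedup();
    pairs
}

// Variant 2: rayon's parallel sort + dedup (close to the HashSet in
// wall-clock time, but burns more total CPU time).
fn dedup_par_sorted(mut pairs: Vec<(u32, u32)>) -> Vec<(u32, u32)> {
    pairs.par_sort_unstable();
    pairs.dedup();
    pairs
}

// Variant 3: swap the hasher; rustc_hash is the current default.
fn dedup_fx(pairs: Vec<(u32, u32)>) -> Vec<(u32, u32)> {
    let unique: FxHashSet<(u32, u32)> = pairs.into_iter().collect();
    unique.into_iter().collect()
}
```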
The options going forward are to either continue optimizing this, or to provide a way for the user to obtain potential collision results with duplicates included (so they can implement application-specific handling); a possible shape for that option is sketched below.
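A hedged sketch of what the opt-out could look like; `CollisionSettings`, `deduplicate`, and `finalize_collisions` are hypothetical names, not part of the crate's current API:

```rust
use rustc_hash::FxHashSet;

// Hypothetical opt-out: both the struct and the field name are illustrative.
pub struct CollisionSettings {
    // When false, skip the dedup pass and return raw candidate pairs.
    pub deduplicate: bool,
}

fn finalize_collisions(
    pairs: Vec<(u32, u32)>,
    settings: &CollisionSettings,
) -> Vec<(u32, u32)> {
    if settings.deduplicate {
        // Current behavior: collapse duplicates through a hash set.
        let unique: FxHashSet<(u32, u32)> = pairs.into_iter().collect();
        unique.into_iter().collect()
    } else {
        // Opt-out: duplicates included; the application applies its own
        // (possibly cheaper, domain-specific) handling.
        pairs
    }
}
```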
Additional note: this may be less of an issue in real applications; the example is meant to be a sort of "stress test" and has 1500 dynamic entities all in constant collision.
In real applications, there are likely to be fewer collisions (per object) at any given time and the duplicate removal pass is likely to operate on a much smaller set of data.