It would be great to have a median metric alongside the average, or a choice between the two. Many benchmarks run warm-up trials followed by many main trials of the same workload. The median is useful for excluding spikes, which usually occur during the warm-up runs.
I am referring to the Average metric in the example report below:
=== API Timing Results: ===
Total Execution Time (ns): 418056422
Total API Time (ns): 407283268
Function, Calls, Time (ns), Time (%), Average (ns), Min (ns), Max (ns)
zeCommandQueueSynchronize, 4, 182529847, 44.82, 45632461, 45271728, 46364532
zeModuleCreate, 1, 111687828, 27.42, 111687828, 111687828, 111687828
zeCommandQueueExecuteCommandLists, 4, 108593458, 26.66, 27148364, 1756304, 102803947
zeCommandListAppendMemoryCopy, 12, 2493748, 0.61, 207812, 62061, 1037087
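The effect described above can be sketched with a small example. This is not the tool's code; it is a hypothetical illustration (using made-up durations) of how a single warm-up spike dominates the average while leaving the median essentially unchanged:

```python
import statistics

def summarize(durations_ns):
    """Return (average, median) of a list of per-call durations in ns."""
    return statistics.fmean(durations_ns), statistics.median(durations_ns)

# Hypothetical durations: the first (warm-up) call is a large spike,
# the remaining calls are all close to 2 ms.
durations = [102_000_000, 2_000_000, 1_900_000, 2_100_000]

avg, med = summarize(durations)
# The average (27 ms) is dominated by the warm-up call;
# the median (~2 ms) reflects the steady-state behavior.
```

Reporting the median (or offering it as an option next to Average) would make the per-function statistics robust to such outliers without requiring users to filter out warm-up iterations themselves.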