I am currently implementing a program that uses multiple libvmaf instances asynchronously.
Initializing each instance with its own context, via VmafCudaConfiguration::cu_ctx = cuCtxCreate(...), poses no issues.
However, when all instances are initialized with the same CUDA context, the "vmaf_score_at_index" calls intermittently return incorrect VMAF scores or NaN values.
I've checked my code from various angles in case the problem was on my side, but everything looks fine, so the issue most likely lies in libvmaf.
At the same time, when the libvmaf instances do not share a single CUDA context, performance degrades, so the shared-context path is the one I need.
I believe this is the same issue as #1180, or a similar one. @kylophone FYI, in case something needs to be fixed.
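For reference, here is a minimal sketch of my initialization pattern (error handling and model setup trimmed). The `vmaf_cuda_state_init` / `vmaf_cuda_import_state` calls reflect my understanding of the libvmaf_cuda.h API in v3.0.0, and `NUM_INSTANCES` is just an illustrative constant:

```c
#include <cuda.h>
#include <libvmaf/libvmaf.h>
#include <libvmaf/libvmaf_cuda.h>

#define NUM_INSTANCES 4  // illustrative; my real instance count varies

int init_instances(VmafContext *vmaf[NUM_INSTANCES])
{
    CUdevice dev;
    CUcontext cu_ctx;
    cuInit(0);
    cuDeviceGet(&dev, 0);
    cuCtxCreate(&cu_ctx, 0, dev);  // one context, shared by all instances

    VmafConfiguration cfg = { .log_level = VMAF_LOG_LEVEL_DEBUG };

    for (int i = 0; i < NUM_INSTANCES; i++) {
        if (vmaf_init(&vmaf[i], cfg)) return -1;

        // Every instance imports CUDA state created from the SAME context.
        // With a separate cuCtxCreate() per instance this works correctly
        // but performance degrades; with the shared context, scores come
        // back wrong or NaN intermittently.
        VmafCudaConfiguration cuda_cfg = { .cu_ctx = cu_ctx };
        VmafCudaState *cu_state;
        if (vmaf_cuda_state_init(&cu_state, cuda_cfg)) return -1;
        if (vmaf_cuda_import_state(vmaf[i], cu_state)) return -1;
    }
    return 0;
}
```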
I believe my situation differs from what you've mentioned. Although I didn't state it explicitly, I called "vmaf_score_at_index" with an exact two-picture delay in every case.
Accordingly, no log messages were output from any vmaf instance, even with the log level set to DEBUG.
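To make that call pattern concrete, here is a minimal sketch of the loop each instance runs (one instance shown; `fill_pictures` is a hypothetical stand-in for my actual frame upload, and model/picture setup is elided):

```c
#include <libvmaf/libvmaf.h>

// Hypothetical helper standing in for my actual frame upload.
extern void fill_pictures(VmafPicture *ref, VmafPicture *dist, unsigned index);

void run_instance(VmafContext *vmaf, VmafModel *model, unsigned n_frames)
{
    for (unsigned i = 0; i < n_frames; i++) {
        VmafPicture ref, dist;
        fill_pictures(&ref, &dist, i);
        vmaf_read_pictures(vmaf, &ref, &dist, i);  // libvmaf consumes the pictures

        // Collect the score exactly two pictures behind the feed, so the
        // score call never races the frame that was just submitted.
        if (i >= 2) {
            double score;
            vmaf_score_at_index(vmaf, model, &score, i - 2);
        }
    }

    vmaf_read_pictures(vmaf, NULL, NULL, 0);  // flush
    for (unsigned i = n_frames >= 2 ? n_frames - 2 : 0; i < n_frames; i++) {
        double score;
        vmaf_score_at_index(vmaf, model, &score, i);  // drain the tail
    }
}
```

The incorrect/NaN values show up in the scores returned above only when all instances share one CUcontext.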
Here is one of my test results: link
The tested VMAF version was 3.0.0.