Issues: intel/torch-xpu-ops
#1222: [E2E] Torchbench models got fail_accuracy. Labels: Accuracy, E2E, torchbench. Opened Dec 26, 2024 by mengfei25.
#1221: [E2E] Torchbench torchrec_dlrm amp inference accuracy failed. Labels: Accuracy, amp_bf16, amp_fp16, E2E, inference, torchbench. Opened Dec 26, 2024 by mengfei25.
#1220: [E2E] Torchbench models load weight got failed. Labels: Accuracy, E2E, torchbench. Opened Dec 26, 2024 by mengfei25.
#1219: [E2E] Torchbench models ImportError: cached_download from huggingface_hub. Labels: E2E, torchbench. Opened Dec 26, 2024 by mengfei25.
#1216: [E2E] Huggingface DebertaV2ForQuestionAnswering got fail_accuracy. Labels: Accuracy, bfloat16, E2E, float16, float32, huggingface, training. Opened Dec 26, 2024 by mengfei25.
#1195: gets nan with complex dtype. Labels: client, module: dependency bug (problem caused by a library we depend on, not by us). Opened Dec 23, 2024 by Stonepia.
#1193: UT cases which failed on rolling driver and passed on lts driver. Labels: ut_triaged. Opened Dec 23, 2024 by PenghuiCheng.
#1169: torch.nextafter has an incorrect result for bf16 on XPU. Labels: bug (something isn't working). Opened Dec 16, 2024 by guangyey.
#1163: torch._standard_gamma() has accuracy gap compared to scipy and torch.cpu. Opened Dec 12, 2024 by daisyden.
#1160: What is the expected result of float64 div when divisor and dividend are the same? Opened Dec 11, 2024 by daisyden.
#1157: [LNL Windows][Test by CD Nightly Wheels] Hugging Face models DebertaForQuestionAnswering and DebertaV2ForMaskedLM failed with RuntimeError: value cannot be converted to type at::BFloat16 without overflow. Labels: client, E2E, module: dependency bug (problem caused by a library we depend on, not by us), ut_triaged. Related: xpu: implement aten::_thnn_fused_lstm_cell for XPU backend #141539. Opened Dec 11, 2024 by yinghu5.
#1152: softshrink is expected to return nan when the input is nan on ARC. Opened Dec 9, 2024 by daisyden.