
Conversation

@xadupre (Member) commented Oct 17, 2025

From #2606.

codecov bot commented Oct 17, 2025

Codecov Report

❌ Patch coverage is 75.26882% with 23 lines in your changes missing coverage. Please review.
✅ Project coverage is 70.42%. Comparing base (8a94ad6) to head (37a861f).
⚠️ Report is 5 commits behind head on main.
✅ All tests successful. No failed tests found.

Files with missing lines                          Patch %   Lines
onnxscript/function_libs/torch_lib/ops/core.py    75.26%    13 Missing and 10 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2641      +/-   ##
==========================================
- Coverage   70.46%   70.42%   -0.04%     
==========================================
  Files         224      224              
  Lines       26572    26678     +106     
  Branches     2637     2658      +21     
==========================================
+ Hits        18723    18789      +66     
- Misses       6928     6956      +28     
- Partials      921      933      +12     

☔ View full report in Codecov by Sentry.

@xadupre xadupre marked this pull request as ready for review October 24, 2025 15:40
@xadupre xadupre enabled auto-merge (squash) October 24, 2025 15:40
        isinstance(index, torch.onnx._internal.exporter._tensors.SymbolicTensor)  # pylint: disable=protected-access
        for index in indices
    )
    and len(values.shape) == 1
Collaborator

What is this condition for? I am just trying to understand the assumptions/conditions for this special case.

Member Author

ONNX has no operator equivalent to index_put, so the goal was to convert the different cases in different ways. A single generic conversion covering every case would likely be inefficient, so I chose to single out the cases for which I know a simple conversion; the implementation at the end of the function still handles everything that is left. About this condition: I don't know exactly when SymbolicTensor appears, I only know that it shows up in some models (Qwen) and that the conversion I implemented works in that case. If the generic case is difficult to implement, let's go through the list of the cases we actually face. That's the logic I followed.
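
For readers wondering what such a special case can look like, here is a minimal sketch (not the code from this PR; the helper name is purely illustrative) of the ScatterND-style rewrite that index_put is commonly lowered to when `values` is 1-D: stack the per-dimension index tensors into the [N, rank] layout that ONNX ScatterND expects, then scatter the values at those coordinates.

```python
# Minimal sketch, NOT the PR's implementation: shows the ScatterND-style
# rewrite that aten::index_put commonly lowers to when `values` is 1-D.
# `index_put_as_scatternd` is a hypothetical helper for illustration only.
import torch

def index_put_as_scatternd(data, indices, values):
    # Broadcast the per-dimension index tensors against each other and stack
    # them along the last axis; this [N, len(indices)] layout matches what
    # ONNX ScatterND expects for its `indices` input.
    stacked = torch.stack(torch.broadcast_tensors(*indices), dim=-1)
    out = data.clone()
    # Emulate ScatterND: write `values` at the stacked coordinates.
    out[tuple(stacked.unbind(dim=-1))] = values
    return out

x = torch.zeros(4, 3)
rows = torch.tensor([0, 2])
cols = torch.tensor([1, 1])
vals = torch.tensor([5.0, 7.0])
assert torch.equal(
    index_put_as_scatternd(x, (rows, cols), vals),
    x.index_put((rows, cols), vals),
)
```

If I read the guard correctly, restricting to 1-D `values` keeps this stacking simple and avoids the `values` broadcasting that the fully general path has to handle.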

@gramalingam (Collaborator)

Adding some pointers/info for my own clarification:
