
Commit e67eeef

[torchlib] Simplify linalg_vector_norm to remove the redundant Abs (#2570)
This happens in some of the LoRA models. When we use ReduceL1/ReduceL2, or when ord is an even number, we don't need to take the Abs of the input.

Signed-off-by: Justin Chu <justinchuby@users.noreply.github.com>
1 parent f54cf47 commit e67eeef

File tree

  • onnxscript/function_libs/torch_lib/ops/linalg.py

1 file changed: +5 −1 lines

onnxscript/function_libs/torch_lib/ops/linalg.py

Lines changed: 5 additions & 1 deletion
@@ -330,8 +330,9 @@ def aten_linalg_vector_norm(
         keepdim = False
     else:
         dim = op.Reshape(dim, op.Constant(value_ints=[-1]))
-    self = op.Abs(self)
+
     if math.isinf(ord):
+        self = op.Abs(self)
         if ord > 0:
             return op.ReduceMax(self, dim, keepdims=keepdim)
         else:
@@ -345,6 +346,9 @@ def aten_linalg_vector_norm(
     elif ord == 2.0:
         return op.ReduceL2(self, dim, keepdims=keepdim)
     else:
+        if ord < 0 or ord % 2 != 0:
+            # Not an even integer (could be odd, fractional or negative), use Abs
+            self = op.Abs(self)
         self_pow = op.Pow(self, ord)
         exp = op.CastLike(1 / ord, self)
         return op.Pow(op.ReduceSum(self_pow, dim, keepdims=keepdim), exp)
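For context, here is a small numpy sketch (not part of the commit) of the identity this optimization relies on. ONNX ReduceL1/ReduceL2 already fold the absolute value into the reduction, and for a positive even integer ord the elementwise Pow makes Abs a no-op; only odd, fractional, or negative orders still need the explicit Abs node, which is what the new `if ord < 0 or ord % 2 != 0` guard checks.

# Sketch only: checks the identity behind the Pow/ReduceSum/Pow fallback path.
import numpy as np

x = np.array([-3.0, -0.5, 0.0, 1.5, 2.0])

# Even integer ord: |x|**ord == x**ord elementwise, so the Abs node can be dropped.
for even_ord in (2.0, 4.0, 6.0):
    assert np.isclose(np.sum(np.abs(x) ** even_ord), np.sum(x ** even_ord))

# Odd (or fractional/negative) ord: the two sums differ, so Abs must stay,
# matching the `if ord < 0 or ord % 2 != 0` branch added in this commit.
assert not np.isclose(np.sum(np.abs(x) ** 3.0), np.sum(x ** 3.0))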
