Update master to upstream #144

Closed
wants to merge 624 commits into from

Conversation

csarofeen
Owner

No description provided.

zou3519 and others added 30 commits June 24, 2020 08:12
Summary:
Pull Request resolved: pytorch#40171

It checks that all of the bdims in BatchedTensorImpl are sorted in
order of ascending `level`.
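In a rough pure-Python sketch of the invariant (the `BatchDim` struct and field names here are stand-ins for the C++ internals, not the actual BatchedTensorImpl API):

```python
from collections import namedtuple

# Stand-in for the C++ BatchDim struct: each batch dim records the vmap
# `level` that introduced it and the tensor dimension it occupies.
BatchDim = namedtuple("BatchDim", ["level", "dim"])

def bdims_are_sorted(bdims):
    # The invariant being asserted: levels appear in ascending order.
    return all(a.level < b.level for a, b in zip(bdims, bdims[1:]))
```

Nested vmaps each introduce a fresh, higher level, so a well-formed bdims list always satisfies this check.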

Test Plan: - Check that nothing breaks in `./build/bin/vmap_test`

Differential Revision: D22102077

Pulled By: zou3519

fbshipit-source-id: 094b7abc6c65208437f0f51a0d0083091912decc
Summary:
Pull Request resolved: pytorch#40172

This PR introduces the initial vmap frontend API. It has the following
limitations that we can resolve in the future:
- the inputs must be a flat list of tensors
- the outputs must be a flat list of tensors
- in_dims = 0 (so we always vmap over dim 0 of input tensors)
- out_dims = 0 (so the returned tensors have their vmap dim appear at
dim 0)
- Coverage limited to operations that have batching rules implemented
(torch.mul, torch.sum, torch.expand).

There are some other semantic limitations (like not being able to handle
mutation, aside from pytorch operations that perform mutation) that will
be documented in the future.

I wanted to introduce the API before adding a slow fallback for the
coverage so that we can test future batching rules (and coverage) via
the python API to avoid verbosity in C++-land.

The way vmap works is that `vmap(func)(inputs)` wraps all Tensor inputs
to be batched in BatchedTensors, sends those into func, and then unwraps
the output BatchedTensors. Operations on BatchedTensors perform the batched
operations that the user is asking for. When performing nested vmaps,
each nested vmap adds a batch dimension upon entry and removes a batch
dimension on exit.
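As a toy analogy of that wrap → call → unwrap flow (plain lists stand in for tensors batched over dim 0; the real implementation dispatches through BatchedTensors rather than looping in Python):

```python
def toy_vmap(func):
    # Toy analogy of vmap: lists stand in for tensors batched over dim 0.
    def batched(*inputs):
        batch_size = len(inputs[0])
        # "wrap" each input, apply func per dim-0 slice, then "unwrap" by
        # stacking the per-slice results back along dim 0.
        return [func(*(inp[i] for inp in inputs)) for i in range(batch_size)]
    return batched

out = toy_vmap(lambda x, y: x * y)([1, 2, 3], [4, 5, 6])  # [4, 10, 18]
```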

Coming up in the near future:
- Support for non-zero in_dims and out_dims
- docstring for vmap
- slow fallback for operators that do not have a batching rule
implemented.

Test Plan: - `pytest test/test_vmap.py -v`

Differential Revision: D22102076

Pulled By: zou3519

fbshipit-source-id: b119f0a8a3a3b1717c92dbbd180dfb1618295563
Summary:
Partial support for slicing of Sequential containers.

- works around missing Sequential slice functionality
   by converting to tuple
- only supports iteration of resulting tuple values,
   not direct call() on the sliced sequential
Pull Request resolved: pytorch#40445

Differential Revision: D22192469

Pulled By: wconstab

fbshipit-source-id: 61c85deda2d58f6e3bea2f1fa1d5d5dde568b9b5
…36786)

Summary:
Should close pytorch#35810.

I decided to keep sparse handling on the Python side for clarity, although it could be moved to the C++ side (into `_amp_non_finite_check_and_unscale_`) without much trouble.

For non-fp16 sparse grads the logic is simple (call `_amp_non_finite_check_and_unscale_` on `grad._values()` instead of `grad` itself).  At least I hope it's that easy.

For fp16 sparse grads, it's trickier.  Sparse tensors can be uncoalesced.  From the [Note](https://pytorch.org/docs/master/sparse.html#torch.sparse.FloatTensor):
> Our sparse tensor format permits uncoalesced sparse tensors, where there may be duplicate coordinates in the indices; in this case, the interpretation is that the value at that index is the sum of all duplicate value entries.

An uncoalesced scaled fp16 grad may have values at duplicate coordinates that are all finite but large, such that adding them to make the coalesced version WOULD cause overflows.**  If I checked `_values()` on the uncoalesced version, it might not report overflows, but I think it should.

So, if the grad is sparse, fp16, and uncoalesced, I still call `_amp_non_finite_check_and_unscale_` to unscale `grad._values()` in-place, but I also double-check the coalesced version by calling a second `_amp_non_finite_check_and_unscale_` on `grad.coalesce()._values()`.  `coalesce()` is out-of-place, so this call doesn't redundantly affect `grad._values()`, but it does have the power to populate the same `found_inf` tensor.  The `is_coalesced()` check and `coalesce()` probably aren't great for performance, but if someone needs a giant embedding table in FP16, they're better than nothing and memorywise, they'll only create a copy of nnz gradient values+indices, which is still way better than changing the whole table to FP32.
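The control flow above can be sketched in plain Python (lists/dicts stand in for sparse values and indices, and the explicit overflow threshold is a stand-in for fp16 arithmetic; the helper mirrors, but is not, the actual `_amp_non_finite_check_and_unscale_`):

```python
import math

FP16_MAX = 65504.0  # largest finite fp16 value, used as an explicit overflow bound

def unscale_and_check(values, inv_scale):
    # Rough analogy of _amp_non_finite_check_and_unscale_: unscale the
    # values in place and flag any entry that is non-finite or would
    # overflow fp16.
    found_inf = False
    for i, v in enumerate(values):
        values[i] = v * inv_scale
        if not math.isfinite(values[i]) or abs(values[i]) > FP16_MAX:
            found_inf = True
    return found_inf

def unscale_sparse_fp16(indices, values, inv_scale):
    # First unscale the (possibly uncoalesced) values in place...
    found_inf = unscale_and_check(values, inv_scale)
    # ...then double-check the coalesced view: duplicate coordinates are
    # summed, and that sum can overflow even when every entry is finite.
    coalesced = {}
    for idx, v in zip(indices, values):
        coalesced[idx] = coalesced.get(idx, 0.0) + v
    found_inf |= unscale_and_check(list(coalesced.values()), 1.0)
    return found_inf
```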

An `unscale` variant with liberty to create unscaled grads out-of-place, and replace `param.grad` instead of writing through it, could get away with just one `_amp_non_finite_check_and_unscale_`.  It could say `coalesced = grad.coalesce()`, do only the stronger `_amp_non_finite_check_and_unscale_` on `coalesced._values()`, and set `param.grad = coalesced`.  I could even avoid replacing `param.grad` itself by going one level deeper and setting `param.grad`'s indices and values to `coalesced`'s, but that seems brittle and still isn't truly "in place".

** you could whiteboard an uncoalesced fp32 grad with the same property, but fp32's range is big enough that I don't think it's realistic.
Pull Request resolved: pytorch#36786

Reviewed By: ezyang

Differential Revision: D22202832

Pulled By: ngimel

fbshipit-source-id: b70961a4b6fc3a4c1882f65e7f34874066435735
…ytorch#40146)

Summary:
Currently, even if USE_OPENMP is turned off, ATEN_THREADING can still use OpenMP. This commit fixes it.
Pull Request resolved: pytorch#40146

Reviewed By: ezyang

Differential Revision: D22208758

Pulled By: pbelevich

fbshipit-source-id: 0866c9bb9b3b5b99d586aed176eb0fbe177efa4a
…ch#40494)

Summary:
Pull Request resolved: pytorch#40494

Resubmit the diff because D22124313 (pytorch@1ec4337) was reverted due to CI test failures.
Added int8_gen_quant_params.cc to CMakeLists.txt to fix the CI failures.

Test Plan: buck test caffe2/caffe2/quantization/server:

Reviewed By: hx89

Differential Revision: D22204244

fbshipit-source-id: a2c8b668f199cc5b0c5894086f554f7c459b1ad7
…ytorch#40115)

Summary:
Pull Request resolved: pytorch#40115

Closes pytorch#37790
Closes pytorch#37944

A user may wish to run DDP's forward + backwards step under a non-default CUDA stream such as those created by `with torch.cuda.Stream(stream)`. In this case, the user should be responsible for synchronizing events on this stream with other streams used in the program (per the documentation at https://pytorch.org/docs/stable/notes/cuda.html#cuda-semantics), but currently DDP has a bug which causes DDP under non-default streams to fail.

If a user does the following:
```
model = DDP(...)
loss = model(input).sum()
loss.backward()
grad = model.module.weight.grad()
average = dist.all_reduce(grad)
```

There is a chance that `average` and `grad` will not be equal. This is because the CUDA kernels corresponding to the  `all_reduce` call may run before `loss.backward()`'s kernels are finished. Specifically, in DDP we copy the allreduced gradients back to the model parameter gradients in an autograd engine callback, but this callback runs on the default stream. Note that this can also be fixed by the application synchronizing on the current stream, although this should not be expected, since the application is not using the current stream at all.

This PR fixes the issue by passing the current stream into DDP's callback.

Tested by adding a UT `test_DistributedDataParallel_non_default_stream` that fails without this PR
ghstack-source-id: 106481208

Differential Revision: D22073353

fbshipit-source-id: 70da9b44e5f546ff8b6d8c42022ecc846dff033e
Summary: Pull Request resolved: pytorch#40506

Test Plan: Imported from OSS

Differential Revision: D22208965

Pulled By: mrshenli

fbshipit-source-id: 7d27b60e2c09e641b4eeb1c89d9f9917c4e72e52
Summary:
BC NOTE:

This change makes it so modules saved with torch.jit.save in PyTorch 1.6 can be loaded by previous versions of PyTorch unless they use torch.div or (soon) torch.full. It also lets tensors saved using torch.save be loaded by previous versions. So this is the opposite of BC-breaking, but I'm using that label to highlight this issue since we don't have a "BC-improving" label.

PR NOTE:
When an operator's semantics change in PyTorch we want to do two things:

1) Preserve the semantics of older serialized Torchscript programs that use the operator
2) Ensure the new semantics are respected

Historically, this meant writing a Versioned Symbol that would remap older versions of the operator into current PyTorch code (1), and bumping the produced file format version (2). Unfortunately, bumping the produced file format version is a nuclear option for ensuring semantics are respected, since it also prevents older versions of PyTorch from loading anything (even tensors!) from newer versions.

Dynamic versioning addresses the nuclear consequences of bumping the produced file format version by only bumping it when necessary. That is, when an operator with changed semantics is detected in the serialized Torchscript. This will prevent Torchscript programs that use the changed operator from loading on earlier versions of PyTorch, as desired, but will have no impact on programs that don't use the changed operator.
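A sketch of the decision rule (the operator-to-version mapping below is illustrative, not the actual table):

```python
# Sketch of dynamic versioning: bump the produced file format version only
# when an operator with changed semantics appears in the serialized
# TorchScript. The mapping below is hypothetical, not the actual table.
BASE_VERSION = 3  # still loadable by older PyTorch releases
CHANGED_OP_MIN_VERSION = {
    "aten::div": 4,   # div's semantics changed -> require version >= 4
    "aten::full": 5,  # full's change is upcoming -> require version >= 5
}

def produced_file_format_version(ops_in_graph):
    # Take the highest version any operator in the graph requires.
    required = [CHANGED_OP_MIN_VERSION.get(op, BASE_VERSION) for op in ops_in_graph]
    return max([BASE_VERSION] + required)
```

Graphs that avoid the changed operators keep the base version, so plain tensors and most programs remain loadable by older releases.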

Note that this change is only applicable when using torch.jit.save and torch.jit.load. torch.save pickles the given object using pickle (by default), which saves a function's Python directly.

No new tests for this behavior are added since the existing tests for versioned division in test_save_load already validate that models with div are loaded correctly at version 4.
Pull Request resolved: pytorch#40279

Reviewed By: dzhulgakov

Differential Revision: D22168291

Pulled By: mruberry

fbshipit-source-id: e71d6380e727e25123c7eedf6d80e5d7f1fe9f95
Summary:
Partially fixes pytorch#38911
Pull Request resolved: pytorch#39681

Differential Revision: D22161342

Pulled By: mrshenli

fbshipit-source-id: 60295077159b02087823e93bb6ebac9d70adea0a
Summary: Pull Request resolved: pytorch#40483

Reviewed By: ezyang

Differential Revision: D22213696

Pulled By: ngimel

fbshipit-source-id: 0321eee8fcaf144b20a5182aa76f98d505c65400
Summary:
PyTorch should stop polluting global namespace with symbols such as `ERROR` `WARNING` and `INFO`.
Since `logging_is_not_google_glog.h` is a C++ header, define severity levels in a namespace and add a `GLOG_` prefix to match the unabbreviated glog severity levels.
Change `LOG` and `LOG_IF` macros to use prefix + namespaced severity levels.

Closes pytorch#40083
Pull Request resolved: pytorch#40491

Test Plan: CI

Reviewed By: ezyang

Differential Revision: D22210925

Pulled By: malfet

fbshipit-source-id: 0ec1181a53baa8bca2f526f245e398582304aeab
…on (pytorch#40241)

Summary:
Pull Request resolved: pytorch#40241

We abort incomplete NCCL Communicators in the ProcessGroupNCCL
destructor, otherwise pending NCCL communicators may block other CUDA ops.

Closes: pytorch#32231
ghstack-source-id: 106469423

Test Plan: CI/Sandcastle

Reviewed By: jiayisuse

Differential Revision: D22103662

fbshipit-source-id: 1f6f88b56bd7a5e9ca5a41698995a76e60e8ad9f
…torch#40404)

Summary:
Pull Request resolved: pytorch#40404

Adds docs to the finish function in ProcessGroup::Work. It's better to have some documentation around these functions since we have some PRs with API changes/optimizations for these work-level functions here and in the subclasses.
ghstack-source-id: 106381736

Test Plan: CI (Docs change only)

Differential Revision: D22174891

fbshipit-source-id: 7901ea3b35caf6f69f37178ca574104d3412de28
…ytorch#40405)

Summary:
Pull Request resolved: pytorch#40405

This adds a finishAndThrow function that completes the work object,
sets an exception if one is provided by the user, and throws an exception (if
it is already set or passed by the caller). This is now done by grabbing the
lock just once and simplifies the wait functions in ProcessGroupGloo.
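A rough Python analogy of the new helper (the real implementation is C++ in ProcessGroup::Work; the class and field names here are stand-ins):

```python
import threading

class Work:
    # Python analogy of ProcessGroup::Work with the new finishAndThrow.
    def __init__(self):
        self._lock = threading.Lock()
        self._completed = False
        self._exception = None

    def finish_and_throw(self, exception=None):
        # Complete the work, record a caller-provided exception, and
        # re-raise any stored exception -- grabbing the lock only once.
        with self._lock:
            self._completed = True
            if exception is not None:
                self._exception = exception
            exc = self._exception
        if exc is not None:
            raise exc
```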
ghstack-source-id: 106516114

Test Plan: CI

Differential Revision: D22174890

fbshipit-source-id: ea74702216c4328187c8d193bf39e1fea43847f6
Summary:
Addresses pytorch#40485.
Pull Request resolved: pytorch#40486

Differential Revision: D22217493

Pulled By: malfet

fbshipit-source-id: 6654c3b53e8af063b508f91728e58262ffbab053
Summary:
pytorch#24697
VitalyFedyunin
glaringlee

Test script:
```Python
import timeit

setup_ones = """
import torch
a = torch.ones(({n}, {n}), dtype={dtype})
b = torch.ones(({n}, {n}), dtype={dtype})
"""

for n, t in [(1000, 10000), (2000, 10000)]:
  for dtype in ('torch.bool', 'torch.int', 'torch.long', 'torch.bfloat16', 'torch.float', 'torch.double'):
  #for dtype in ('torch.bool', 'torch.int', 'torch.long', 'torch.float', 'torch.double'):
    print('torch.ones(({n}, {n})) equal for {t} times {dtype}'.format(n=n, t=t, dtype=dtype))
    print(timeit.timeit(stmt='torch.equal(a, b)', setup=setup_ones.format(n=n, dtype=dtype), number=t))

setup_rand = """
import torch
a = torch.rand(({n}, {n}), dtype={dtype})
b = a.clone()
"""
for n, t in [(1000, 10000), (2000, 10000)]:
  for dtype in ('torch.float', 'torch.double'):
    print('torch.rand(({n}, {n})) for {t} times {dtype}'.format(n=n, t=t, dtype=dtype))
    print(timeit.timeit(stmt='torch.equal(a, b)', setup=setup_rand.format(n=n, dtype=dtype), number=t))

setup_non_contiguous = """
import torch
a = torch.rand(({n}, {n}), dtype={dtype})
a2 = a[:, 500:]
a3 = a2.clone()
torch.equal(a2, a3)
"""
for n, t in [(1000, 10000), (2000, 10000)]:
  for dtype in ('torch.float', 'torch.double'):
    print('non_contiguous torch.rand(({n}, {n})) for {t} times {dtype}'.format(n=n, t=t, dtype=dtype))
    print(timeit.timeit(stmt='torch.equal(a2, a3)', setup=setup_non_contiguous.format(n=n, dtype=dtype), number=t))

setup_not_equal = """
import torch
a = torch.rand(({n}, {n}), dtype={dtype})
b = torch.rand(({n}, {n}), dtype={dtype})
torch.equal(a, b)
"""
for n, t in [(1000, 10000), (2000, 10000)]:
  for dtype in ('torch.float', 'torch.double'):
    print('not equal torch.rand(({n}, {n})) for {t} times {dtype}'.format(n=n, t=t, dtype=dtype))
    print(timeit.timeit(stmt='torch.equal(a, b)', setup=setup_not_equal.format(n=n, dtype=dtype), number=t))
```

TH
```
torch.ones((1000, 1000)) equal for 10000 times torch.bool
1.8391206220258027
torch.ones((1000, 1000)) equal for 10000 times torch.int
1.8877864250680432
torch.ones((1000, 1000)) equal for 10000 times torch.long
1.938108820002526
torch.ones((1000, 1000)) equal for 10000 times torch.bfloat16
3.184849138953723
torch.ones((1000, 1000)) equal for 10000 times torch.float
1.8825413499725983
torch.ones((1000, 1000)) equal for 10000 times torch.double
2.7266416549682617
torch.ones((2000, 2000)) equal for 10000 times torch.bool
7.227149627986364
torch.ones((2000, 2000)) equal for 10000 times torch.int
7.76215292501729
torch.ones((2000, 2000)) equal for 10000 times torch.long
9.631909006042406
torch.ones((2000, 2000)) equal for 10000 times torch.bfloat16
8.097328286035918
torch.ones((2000, 2000)) equal for 10000 times torch.float
5.5739822529722005
torch.ones((2000, 2000)) equal for 10000 times torch.double
8.444009944912978
torch.rand((1000, 1000)) for 10000 times torch.float
1.168096570065245
torch.rand((1000, 1000)) for 10000 times torch.double
1.6577326939441264
torch.rand((2000, 2000)) for 10000 times torch.float
5.49395391496364
torch.rand((2000, 2000)) for 10000 times torch.double
8.507486199960113
non_contiguous torch.rand((1000, 1000)) for 10000 times torch.float
6.074504268006422
non_contiguous torch.rand((1000, 1000)) for 10000 times torch.double
6.1426916810451075
non_contiguous torch.rand((2000, 2000)) for 10000 times torch.float
37.501055537955835
non_contiguous torch.rand((2000, 2000)) for 10000 times torch.double
44.6880351039581
not equal torch.rand((1000, 1000)) for 10000 times torch.float
0.029356416082009673
not equal torch.rand((1000, 1000)) for 10000 times torch.double
0.025421109050512314
not equal torch.rand((2000, 2000)) for 10000 times torch.float
0.026333761983551085
not equal torch.rand((2000, 2000)) for 10000 times torch.double
0.02748022007290274
```

ATen
```
torch.ones((1000, 1000)) equal for 10000 times torch.bool
0.7961567062884569
torch.ones((1000, 1000)) equal for 10000 times torch.int
0.49172434909269214
torch.ones((1000, 1000)) equal for 10000 times torch.long
0.9459248608909547
torch.ones((1000, 1000)) equal for 10000 times torch.bfloat16
2.0877483217045665
torch.ones((1000, 1000)) equal for 10000 times torch.float
0.606857153121382
torch.ones((1000, 1000)) equal for 10000 times torch.double
1.1388208279386163
torch.ones((2000, 2000)) equal for 10000 times torch.bool
2.0329296849668026
torch.ones((2000, 2000)) equal for 10000 times torch.int
3.534358019940555
torch.ones((2000, 2000)) equal for 10000 times torch.long
8.19841272290796
torch.ones((2000, 2000)) equal for 10000 times torch.bfloat16
6.595649406313896
torch.ones((2000, 2000)) equal for 10000 times torch.float
4.193911510054022
torch.ones((2000, 2000)) equal for 10000 times torch.double
7.931309659034014
torch.rand((1000, 1000)) for 10000 times torch.float
0.8877940969541669
torch.rand((1000, 1000)) for 10000 times torch.double
1.4142901846207678
torch.rand((2000, 2000)) for 10000 times torch.float
4.010025603231043
torch.rand((2000, 2000)) for 10000 times torch.double
8.126411964651197
non_contiguous torch.rand((1000, 1000)) for 10000 times torch.float
0.602473056409508
non_contiguous torch.rand((1000, 1000)) for 10000 times torch.double
0.6784545010887086
non_contiguous torch.rand((2000, 2000)) for 10000 times torch.float
3.0991827426478267
non_contiguous torch.rand((2000, 2000)) for 10000 times torch.double
5.719010795000941
not equal torch.rand((1000, 1000)) for 10000 times torch.float
0.046060710679739714
not equal torch.rand((1000, 1000)) for 10000 times torch.double
0.036034489050507545
not equal torch.rand((2000, 2000)) for 10000 times torch.float
0.03686975734308362
not equal torch.rand((2000, 2000)) for 10000 times torch.double
0.04189508780837059
```
Pull Request resolved: pytorch#33286

Differential Revision: D22211962

Pulled By: glaringlee

fbshipit-source-id: a5c48f328432c1996f28e19bc75cb495fb689f6b
Summary:
Update the following feature classifications in docs to align with the changes:
1. [High Level Autograd APIs](https://pytorch.org/docs/stable/autograd.html#functional-higher-level-api): Beta (was experimental)
2. [Eager Mode Quantization](https://pytorch.org/docs/stable/quantization.html): Beta (was experimental)
3. [Named Tensors](https://pytorch.org/docs/stable/named_tensor.html): Prototype (was experimental)
4. [TorchScript/RPC](https://pytorch.org/docs/stable/rpc.html#rpc): Prototype (was experimental)
5. [Channels Last Memory Layout](https://pytorch.org/docs/stable/tensor_attributes.html#torch-memory-format): Beta (was experimental)
6. [Custom C++ Classes](https://pytorch.org/docs/stable/cpp_index.html): Beta (was experimental)
7. [Torch.Sparse](https://pytorch.org/docs/stable/sparse.html): Beta (was experimental)
Pull Request resolved: pytorch#39966

Differential Revision: D22213217

Pulled By: jlin27

fbshipit-source-id: dc49337cbc7026ed8dcac506fc60029dc3add854
Summary:
Pull Request resolved: pytorch#40424

dictConstruct should preserve the inputs order

Test Plan: Imported from OSS

Differential Revision: D22202690

Pulled By: wanchaol

fbshipit-source-id: c313b531b7fa49e6f3486396d61bfc5d6400cd01
…9601)

Summary: Pull Request resolved: pytorch#39601

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D22202689

Pulled By: wanchaol

fbshipit-source-id: 5271eb3d8fdcda3d730a085aa555b43c35d14876
…rch#40522)

Summary: Pull Request resolved: pytorch#40522

Differential Revision: D22215685

Pulled By: AshkanAliabadi

fbshipit-source-id: 78c103c4f7ad21e78069dc86a8ee47aebc9aa73e
…rch#40520)

Summary: Pull Request resolved: pytorch#40520

Differential Revision: D22215614

Pulled By: AshkanAliabadi

fbshipit-source-id: 5e41a3a69522cbfe1cc4ac76a0d1f3e90a58528d
Summary:
Pull Request resolved: pytorch#40525

Move `USE_CUDNN` define under `USE_CUDA` guard, add `cuda/shared/cudnn.cpp` to filelist if either USE_ROCM or USE_CUDNN is set.
This is a prep change for PyTorch CUDA src filelist unification change.

Test Plan: CI

Differential Revision: D22214899

fbshipit-source-id: b71b32fc603783b41cdef0e7fab2cc9cbe750a4e
…ytorch#40516)

Summary: Pull Request resolved: pytorch#40516

Differential Revision: D22215554

Pulled By: AshkanAliabadi

fbshipit-source-id: f779cf6e08cf344b87071c2ffc9b3f7cf4659085
…#40451)

Summary:
Fixes pytorchgh-40287

The `int -> bool` conversion takes higher precedence than `int -> IntArrayRef`. So, calling `std(0)` in C++ would select the `std(unbiased=False)` overload instead.
Pull Request resolved: pytorch#40451

Differential Revision: D22217926

Pulled By: ezyang

fbshipit-source-id: 7520792fab5ab6665bddd03b6f57444c6c729af4
…h#40526)

Summary: Pull Request resolved: pytorch#40526

Differential Revision: D22215600

Pulled By: AshkanAliabadi

fbshipit-source-id: 6ff0c17d17f118b64ae34c0007b705c7127f07ef
Summary:
Pull Request resolved: pytorch#40495

As part of debugging flaky ddp_under_dist_autograd tests, I realized
we were running into the following deadlock.

1) Rank 0 would go into DDP construction, hold GIL and wait for broadcast in
DDP construction.
2) Rank 3 is a little slower and performs an RRef fetch call before the DDP
construction.
3) The RRef fetch call is done on Rank 0 and tries to acquire GIL.
4) We now have a deadlock since Rank 0 is waiting for Rank 3 to enter the
collective and Rank 3 is waiting for Rank 0 to release GIL.
ghstack-source-id: 106534442

Test Plan:
1) Ran ddp_under_dist_autograd 500 times.
2) waitforbuildbot

Differential Revision: D22205180

fbshipit-source-id: 6afd55342e801b9edb9591ff25158a244a8ea66a
…#39516)

Summary:
Fixes pytorch#38716, fixes pytorch#37234

This algorithm does the summation along a single axis with multiple "levels" of accumulator, each of which is designed to hold the sum of an order of magnitude more values than the previous.

e.g. if there are 2^16 elements, the first level will hold the sum of 2^4 elements, and so on in increasing powers of 2: 2^4, 2^8, 2^12 and finally 2^16.

This limits the differences in magnitude of the partial results being added together, and so we don't lose accuracy as the axis length increases.
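A scalar sketch of the scheme (chunk size 16 mirrors the 2^4 example; the actual kernel is vectorized C++):

```python
def cascaded_sum(values, chunk=16):
    # Sum in small chunks, then recursively sum the partial sums.
    # Each level adds numbers of comparable magnitude, which bounds the
    # rounding error as the axis length grows.
    if len(values) <= chunk:
        total = 0.0
        for v in values:
            total += v
        return total
    partials = [cascaded_sum(values[i:i + chunk], chunk)
                for i in range(0, len(values), chunk)]
    return cascaded_sum(partials, chunk)
```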

WIP to write a vectorized version.
Pull Request resolved: pytorch#39516

Reviewed By: ezyang

Differential Revision: D22106251

Pulled By: ngimel

fbshipit-source-id: b56de4773292439dbda62b91f44ff37715850ae9
Summary:
Add `int8_gen_quant_params.cc` added by
pytorch#40494 to bazel build rules
Pull Request resolved: pytorch#40536

Reviewed By: mruberry

Differential Revision: D22219595

Pulled By: malfet

fbshipit-source-id: 2875a0b9c55bad2b052a898661b96eab490f6451
Summary:
These were changes that had to be made in the `release/1.6` branch in order to get backups to work.

They should be brought to the master branch.
Pull Request resolved: pytorch#40515

Differential Revision: D22221308

Pulled By: seemethere

fbshipit-source-id: 24e2a0196a8e775fe324a383c8f0c681118b741b
Ailing Zhang and others added 26 commits July 6, 2020 13:11
…ytorch#40883)

Summary:
There is a TODO tracked in pytorch#40882

Pull Request resolved: pytorch#40883

Reviewed By: pbelevich

Differential Revision: D22346087

Pulled By: ailzhang

fbshipit-source-id: b4789ca3a10f6a72c6e77276bde45633eb6cf545
Summary:
Add documentation for dynamic quantized modules

Pull Request resolved: pytorch#40896

Differential Revision: D22395955

Pulled By: z-a-f

fbshipit-source-id: cdc956d1509a0901bc24b73b6ca68a1b65e00cc2
Summary:
Pull Request resolved: pytorch#40903

This PR continues the work of pytorch#38467 - decoupling Autograd and Trace for manually registered ops.
ghstack-source-id: 107158638

Test Plan: CI

Differential Revision: D22354804

fbshipit-source-id: f5ea45ade2850296c62707a2a4449d7d67a9f5b5
Summary:
Pull Request resolved: pytorch#41004

Tracing has been moved into separate files. Now we can disable it by not compiling those source files for the xplat mobile build.
ghstack-source-id: 107158627

Test Plan: CI + build size bot

Reviewed By: iseeyuan

Differential Revision: D22372615

fbshipit-source-id: bf2e2249e401295ff63020a292df119b188fb966
…tation (pytorch#41025)

Summary:
Bundle of small edits to fix formatting.

Pull Request resolved: pytorch#41025

Differential Revision: D22398364

Pulled By: mruberry

fbshipit-source-id: 8d484cb52a1cf4a8eb1f64914574250c9fd5043d
Summary: Pull Request resolved: pytorch#40625

Test Plan: Continuous integration.

Reviewed By: suo

Differential Revision: D22259289

fbshipit-source-id: 76cb097dd06a636004fc780b17cb20f27d3821de
…0864)

Summary:
We have basic reduction fusion working and have improved the code generator to approach the performance of eager-mode reductions. Coming soon are pointwise-reduction fusions, done in a way that should prevent the possibility of hitting regressions. We are also working on performant softmax kernels in the code generator, which may be our next fusion target.

Pull Request resolved: pytorch#40864

Reviewed By: ngimel

Differential Revision: D22392877

Pulled By: soumith

fbshipit-source-id: 457448a807d628b1035f6d90bc0abe8a87bf8447
Summary:
Closes pytorch#40560

This adds the equation for the weighted mean to `CrossEntropyLoss`'s docs and the `reduction` argument for `CrossEntropyLoss` and `NLLLoss` no longer describes a non-weighted mean of the outputs.
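The weighted mean in question, as a plain-Python sketch over per-sample losses `l_n` and class weights `w`:

```python
def weighted_mean_reduction(losses, targets, weight):
    # reduction='mean' with class weights: sum(w[y_n] * l_n) / sum(w[y_n]),
    # a weighted average -- not sum(w[y_n] * l_n) / N.
    num = sum(weight[t] * l for l, t in zip(losses, targets))
    den = sum(weight[t] for t in targets)
    return num / den
```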

Pull Request resolved: pytorch#40991

Differential Revision: D22395805

Pulled By: ezyang

fbshipit-source-id: a623b6dd2aab17220fe0bf706bd9b62d6ba531fd
…ction methods. (pytorch#40962)

Summary:
Follow up to pytorch#36447 . Update for pytorch#33389.

Also removes unused `unordered_map` include from the CPP file.

Pull Request resolved: pytorch#40962

Differential Revision: D22376253

Pulled By: ngimel

fbshipit-source-id: 4e7432190e9a847321aec6d6f6634056fa69bdb8
Summary:
This trick should have no effect on performance, but it reduces the size of kernels using the template by 10%.
For example, the size of BinaryMulDivKernel.cu.o compiled by the CUDA 10.1 toolchain for sm_75 was 4.2 MB before the change and 3.8 MB after.

Pull Request resolved: pytorch#40992

Differential Revision: D22398733

Pulled By: malfet

fbshipit-source-id: 6576f4da00dc5fc2575b2313577f52c6571d5e6f
Summary:
Pull Request resolved: pytorch#40856

Add a new activation function - Mish: A Self Regularized Non-Monotonic Neural Activation Function https://arxiv.org/abs/1908.08681
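For reference, Mish is compact enough to sketch directly (plain Python, not the Caffe2 operator):

```python
import math

def softplus(x):
    # Numerically stable softplus: ln(1 + e^x).
    return math.log1p(math.exp(-abs(x))) + max(x, 0.0)

def mish(x):
    # Mish(x) = x * tanh(softplus(x))  (Misra, arXiv:1908.08681)
    return x * math.tanh(softplus(x))
```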

Test Plan:
buck test //caffe2/caffe2/python/operator_test:elementwise_ops_test -- 'test_mish'

{F242275183}

Differential Revision: D22158035

fbshipit-source-id: 459c1dd0ac5b515913fc09b5f4cd13dcf095af31
Summary: Pull Request resolved: pytorch#40795

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D22314215

Pulled By: jamesr66a

fbshipit-source-id: a2fb5c6804d4014f8e437c6858a7be8cd3efb380
Summary:
Fixes pytorch#24557

ASV benchmark:

```
import torch

sizes = [
    (10**6,),
    (1000, 1000),
    (10, 10),
    (1, 2, 3, 4, 5, 6, 7, 8, 9, 10),
]

class EqualTrue:
    params = range(len(sizes))

    def setup(self, n):
        dims = sizes[n]
        self.a = torch.rand(dims, device='cuda')
        self.b = self.a.clone()

    def time_equal(self, n):
        torch.equal(self.a, self.b)

class EqualFalse:
    params = range(len(sizes))

    def setup(self, n):
        dims = sizes[n]
        self.a = torch.rand(dims, device='cuda')
        self.b = torch.rand(dims, device='cuda')

    def time_equal(self, n):
        torch.equal(self.a, self.b)
```

Old results:
```
[ 75.00%] ··· equal.EqualFalse.time_equal
[ 75.00%] ··· ======== ============
               param1
              -------- ------------
                 0       67.7±7μs
                 1       74.0±2μs
                 2      24.4±0.1μs
                 3      135±0.2μs
              ======== ============

[100.00%] ··· equal.EqualTrue.time_equal
[100.00%] ··· ======== ============
               param1
              -------- ------------
                 0      59.8±0.2μs
                 1      59.9±0.3μs
                 2      25.0±0.5μs
                 3      136±0.2μs
              ======== ============
```

New results:
```
[ 75.00%] ··· equal.EqualFalse.time_equal
[ 75.00%] ··· ======== ============
               param1
              -------- ------------
                 0      44.4±0.2μs
                 1      44.5±0.4μs
                 2      31.3±0.3μs
                 3      96.6±0.5μs
              ======== ============

[100.00%] ··· equal.EqualTrue.time_equal
[100.00%] ··· ======== ============
               param1
              -------- ------------
                 0      44.2±0.2μs
                 1      44.6±0.2μs
                 2      30.8±0.3μs
                 3      97.3±0.2μs
              ======== ============
```

Pull Request resolved: pytorch#36483

Differential Revision: D21451829

Pulled By: VitalyFedyunin

fbshipit-source-id: 033e8060192c54f139310aeafe8ba784bab94ded
Summary:
Original commit changeset: 46c59d849fa8

The original commit is breaking DPER3 release pipeline with the following failures:
https://www.internalfb.com/intern/chronos/jobinstance?jobinstanceid=9007207344413239&smc=chronos_gp_admin_client&offset=0
```
Child workflow f 202599639  failed with error: c10::Error: [enforce fail at operator.cc:76] blob != nullptr. op Save: Encountered a non-existing input blob: feature_preproc/feature_sparse_to_dense/default_float_value
```
https://www.internalfb.com/intern/chronos/jobinstance?jobinstanceid=9007207344855973&smc=chronos_gp_admin_client&offset=0
```
Child workflow f 202629391  failed with error: c10::Error: [enforce fail at operator.cc:76] blob != nullptr. op Save: Encountered a non-existing input blob: tum_preproc/inductive/feature_sparse_to_dense/default_float_value
```

Related UBN tasks: T69529846, T68986110

Test Plan: Build a DPER3 package on top of this commit, and check that DPER3 release test `model_deliverability_test` is passing.

Differential Revision: D22396317

fbshipit-source-id: 92d5b30cc146c005d6159a8d5bfe8973e2c546dd
Summary:
Pull Request resolved: pytorch#40938

already accepted in pytorch#40645

Test Plan: Imported from OSS

Reviewed By: jamesr66a, Krovatkin

Differential Revision: D22394675

Pulled By: eellison

fbshipit-source-id: 1e9dbb24a4cb564d9a68280d2166329ca9fb0425
Summary:
Pull Request resolved: pytorch#40939

Previously, when we did shape analysis by running the op with representative inputs, we would always set the grad property to false. This led to incorrect static analysis when we created differentiable subgraphs, propagated shapes without also propagating requires_grad, and then uninlined them.

Test Plan: Imported from OSS

Differential Revision: D22394676

Pulled By: eellison

fbshipit-source-id: 254e6e9f964b40d160befe0e125abe1b7aa2bd5e
Summary:
Most time-consuming tests in test_nn (taking about half the time) were gradgradchecks on Conv3d. Reduce their sizes, and, most importantly, run gradgradcheck single-threaded, because that cuts the time of conv3d tests by an order of magnitude, and barely affects other tests.
These changes bring test_nn time down from 1200 s to ~550 s on my machine.

Pull Request resolved: pytorch#40999

Differential Revision: D22396896

Pulled By: ngimel

fbshipit-source-id: 3b247caceb65d64be54499de1a55de377fdf9506
Summary:
Pull Request resolved: pytorch#40717

`in_dims` specifies which dimension of the input tensors should be
vmapped over. One can also specify `None` as an `in_dim` for a particular
input to indicate that we do not map over said input.

We implement `in_dims` by creating a BatchedTensor with BatchDim equal
to said `in_dim`. Most of this PR is error checking. `in_dims` must
satisfy the following:
- `in_dims` can be either an int or a Tuple[Optional[int]]. If it is an
int, we use it as the `in_dim` for every input.
- If `in_dims` is non-None at some index `idx`, then the input at index
`idx` MUST be a tensor (vmap can only map over tensors).

jax supports something more generalized: their `in_dims` can match the
structure of the `inputs` to the function (i.e., it is a nested python
data structure matching the data structure of `inputs` specifying where
in `inputs` the Tensors to be mapped are and what their map dims should
be). We don't have the infrastructure yet, so we only support `int` or a
flat tuple for `in_dims`.
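The `in_dims` semantics described above can be illustrated with a toy, pure-Python model of vmap over lists (`toy_vmap` is an illustrative sketch, not the actual BatchedTensor implementation):

```python
def toy_vmap(func, in_dims):
    """Toy model of vmap's in_dims: map func over dim 0 of each input
    whose in_dim is 0; pass inputs with in_dim=None through unchanged."""
    def wrapped(*inputs):
        if not any(d is not None for d in in_dims):
            raise ValueError("at least one input must be mapped over")
        # The batch size comes from the first mapped input.
        batch_size = next(
            len(x) for x, d in zip(inputs, in_dims) if d is not None)
        results = []
        for i in range(batch_size):
            args = [x[i] if d is not None else x
                    for x, d in zip(inputs, in_dims)]
            results.append(func(*args))
        return results
    return wrapped

# Map over the first input only; the scalar 10 is shared by every call.
batched_mul = toy_vmap(lambda a, b: a * b, in_dims=(0, None))
print(batched_mul([1, 2, 3], 10))  # [10, 20, 30]
```

The real implementation achieves the same effect without a Python loop by wrapping each mapped input in a BatchedTensor whose BatchDim equals its `in_dim`.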

Test Plan: - `pytest test/test_vmap.py -v`

Differential Revision: D22397914

Pulled By: zou3519

fbshipit-source-id: 56d2e14be8b6024e4cde2729eff384da305b4ea3
Summary:
Closes pytorch#40784

Pull Request resolved: pytorch#41038

Differential Revision: D22404273

Pulled By: malfet

fbshipit-source-id: 8df05f948f069ac95591d523222faa1327429e71
Summary:
I ran `make linkcheck` using `sphinx.builders.linkcheck` on the documentation and noticed a few links weren't using HTTPS so I quickly updated them all.

Pull Request resolved: pytorch#40878

Differential Revision: D22404647

Pulled By: ngimel

fbshipit-source-id: 9c9756db59197304023fddc28f252314f6cf4af3
Summary:
In issue pytorch#36997 a user encountered an unhelpful error message when trying to export a model to ONNX. The Pad operator in opset 9 requires the list of paddings to be constant. This PR improves the error message given to the user when that is not the case.
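A minimal sketch of the kind of check involved (the helper name and return shape are hypothetical, not the actual symbolic_opset9 code): the exporter can test whether the pads argument folded to a constant Python sequence and raise a descriptive error otherwise.

```python
def symbolic_pad_opset9(pads):
    """Hypothetical sketch: opset-9 Pad only supports constant padding."""
    if not isinstance(pads, (list, tuple)):
        # A non-constant value (e.g. a graph Value computed at runtime)
        # cannot be represented as an opset-9 Pad attribute.
        raise RuntimeError(
            "ONNX export of Pad in opset 9: the sizes of the padding must "
            "be constant. Use a higher opset or make the padding constant.")
    return {"op": "Pad", "attrs": {"pads": list(pads)}}

print(symbolic_pad_opset9([0, 1, 0, 1]))
```

The point of the PR is the second branch: failing with a message that names the operator, the opset, and the remedy instead of a generic internal error.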

Pull Request resolved: pytorch#39651

Reviewed By: hl475

Differential Revision: D21992262

Pulled By: houseroad

fbshipit-source-id: b817111c2a40deba85e4c6cdb874c1713312dba1
Summary:
Fix export of full_like when fill_value is of type torch._C.Value.

This PR fixes a bug encountered when exporting GPT2DoubleHeadsModel (huggingface/transformers#4950).

Pull Request resolved: pytorch#40063

Reviewed By: hl475

Differential Revision: D22398353

Pulled By: houseroad

fbshipit-source-id: 6980a61211fe571c2e4a57716970f474851d811e
Summary:
This PR adds support for the torch `view_as` operator.

Pull Request resolved: pytorch#40496

Reviewed By: hl475

Differential Revision: D22398318

Pulled By: houseroad

fbshipit-source-id: f92057f9067a201b707aa9b8fc4ad34643dd5fa3
Summary:
It's a known gcc-5.4 bug that an enum class is not hashable by default, so `std::unordered_map` needs an explicit third template parameter to compute the hash of the key type.

Should fix regression caused by pytorch#40864

Pull Request resolved: pytorch#41055

Differential Revision: D22405478

Pulled By: malfet

fbshipit-source-id: f4bd36bebdc1ad0251ebd1e6cefba866e6605fe6
Summary:
Forgot to add this to pytorch#41055

Pull Request resolved: pytorch#41063

Differential Revision: D22407451

Pulled By: malfet

fbshipit-source-id: 6f06653b165cc4817d134657f87caf643182832a
Summary:
Pull Request resolved: pytorch#41023

Remove Logger in get_matching_activations since it's not used.
ghstack-source-id: 107237046

Test Plan:
buck test mode/dev caffe2/test:quantization -- 'test_compare_weights_lstm_dynamic'
buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_lstm_dynamic'
buck test mode/dev caffe2/test:quantization -- 'test_compare_model_outputs_lstm_dynamic'
buck test mode/dev caffe2/test:quantization -- 'test_compare_weights_conv_static'
buck test mode/dev caffe2/test:quantization -- 'test_compare_weights_linear_static'
buck test mode/dev caffe2/test:quantization -- 'test_compare_weights_linear_dynamic'
buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_conv_static'
buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_linear_static'
buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_submodule_static'
buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_functional_static'
buck test mode/dev caffe2/test:quantization -- 'test_compare_model_stub_linear_dynamic'
buck test mode/dev caffe2/test:quantization -- 'test_compare_model_outputs_conv_static'
buck test mode/dev caffe2/test:quantization -- 'test_compare_model_outputs_linear_static'
buck test mode/dev caffe2/test:quantization -- 'test_compare_model_outputs_functional_static'
buck test mode/dev caffe2/test:quantization -- 'test_compare_model_outputs_linear_dynamic'

Differential Revision: D22394957

fbshipit-source-id: 7d59e0f35e9f4c304b8487460d48236ee6e5a872
@csarofeen
Copy link
Owner Author

conflict showed up here, just pushed.

@csarofeen csarofeen closed this Jul 7, 2020
ftxj pushed a commit to ftxj/pytorch that referenced this pull request May 25, 2023
When a tensor is resized, a reference to its sizes array may become invalid. Make a copy in advance.
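Python's buffer protocol guards against the analogous hazard, which makes for a compact illustration of why the fix copies the sizes before resizing (a pure-Python analogy, not the PyTorch code):

```python
# A bytearray refuses to reallocate while a memoryview still references
# its storage. The C++ ArrayRef into TensorImpl's sizes has no such
# guard, so resize_ must copy the sizes before reallocating.
buf = bytearray(b"abcd")
view = memoryview(buf)
try:
    buf.extend(b"ef")  # would invalidate `view`'s pointer
except BufferError:
    print("resize blocked while a view is live")
view.release()
buf.extend(b"ef")      # safe once no references remain
print(buf)             # bytearray(b'abcdef')
```

In C++ nothing blocks the resize, so the stale `ArrayRef<SymInt>` is read after the underlying storage is freed, which is exactly the heap-use-after-free ASAN reports below.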

<details>
<summary>ASAN report</summary>

```
=================================================================
==1115867==ERROR: AddressSanitizer: heap-use-after-free on address 0x61000013d790 at pc 0x03ff8e7da360 bp 0x03fff53c83a0 sp 0x03fff53c8390
READ of size 8 at 0x61000013d790 thread T0
    #0 0x3ff8e7da35f in c10::SymInt::is_heap_allocated() const /home/user/pytorch/c10/core/SymInt.h:154
    #1 0x3ff8e7da35f in c10::SymInt::maybe_as_int() const /home/user/pytorch/c10/core/SymInt.h:215
    #2 0x3ff8e7d0a6d in c10::SymInt::sym_eq(c10::SymInt const&) const /home/user/pytorch/c10/core/SymInt.cpp:69
    #3 0x3ff7a9ab0bd in c10::SymInt::operator==(c10::SymInt const&) const /home/user/pytorch/c10/core/SymInt.h:177
    #4 0x3ff7a9aaedd in bool std::__equal<false>::equal<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-
v11/bits/stl_algobase.h:1162
    #5 0x3ff7a9aae4b in bool std::__equal_aux1<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/
stl_algobase.h:1211
    #6 0x3ff7a9aae05 in bool std::__equal_aux<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/s
tl_algobase.h:1219
    #7 0x3ff7a9aad97 in bool std::equal<c10::SymInt const*, c10::SymInt const*>(c10::SymInt const*, c10::SymInt const*, c10::SymInt const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_alg
obase.h:1556
    #8 0x3ff4b23c771 in c10::ArrayRef<c10::SymInt>::equals(c10::ArrayRef<c10::SymInt>) const /home/user/pytorch/c10/util/ArrayRef.h:188
    #9 0x3ff4cb91bc1 in bool c10::operator!=<c10::SymInt>(c10::ArrayRef<c10::SymInt>, c10::ArrayRef<c10::SymInt>) /home/user/pytorch/c10/util/ArrayRef.h:341
    #10 0x3ff6d1b57ff in torch::ADInplaceOrView::resize_(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/torch/csrc/autograd/Variab
leTypeManual.cpp:408
    #11 0x3ff6d1e59c7 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c1
0::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>
> >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
    #12 0x3ff6d1e59c7 in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10:
:ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::Sy
mInt>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::Disp
atchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:480
    #13 0x3ff51ca5129 in at::Tensor const& c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(void*, c10::OperatorKernel*,
c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>&&, c10::optional<c10::MemoryFormat>&&) /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50
    #14 0x3ff51ca6e8f in at::Tensor const& c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::OperatorHandle const&, c10::D
ispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:90
    #15 0x3ff51ca6e8f in at::Tensor const& c10::Dispatcher::redispatch<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Ten
sor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)
const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:656
    #16 0x3ff5182006b in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::redispatch(c10::DispatchKeySet, at::Tensor const&, c
10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:492
    #17 0x3ff5182006b in at::_ops::resize_::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) aten/src/ATen/Operators_4.cpp:2144
    #18 0x3ff6d1d5e07 in at::redispatch::resize__symint(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) aten/src/ATen/RedispatchFunctions.h:2847
    #19 0x3ff6d1bbb67 in torch::autograd::VariableType::(anonymous namespace)::resize_(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pyto
rch/torch/csrc/autograd/VariableTypeManual.cpp:243
    #20 0x3ff6d1bd197 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c1
0::MemoryFormat>), &torch::autograd::VariableType::(anonymous namespace)::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10
::optional<c10::MemoryFormat> > >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFu
nctionIntoFunctor.h:13
    #21 0x3ff6d1bd197 in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10:
:ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>), &torch::autograd::VariableType::(anonymous namespace)::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor
 const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c
10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor
.h:480
    #22 0x3ff51ca5129 in at::Tensor const& c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(void*, c10::OperatorKernel*,
c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>&&, c10::optional<c10::MemoryFormat>&&) /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50
    #23 0x3ff5181ead1 in at::Tensor const& c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::OperatorHandle const&, c10::D
ispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:90
    #24 0x3ff5181ead1 in at::Tensor const& c10::Dispatcher::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Tensor co
nst& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/user/pytorch/at
en/src/ATen/core/dispatch/Dispatcher.h:639
    #25 0x3ff5181ead1 in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(at::Tensor const&, c10::ArrayRef<c10::SymInt>,
c10::optional<c10::MemoryFormat>) const /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:487
    #26 0x3ff5181ead1 in at::_ops::resize_::call(at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) aten/src/ATen/Operators_4.cpp:2137
    #27 0x3ff79b44fcf in at::Tensor::resize__symint(c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const aten/src/ATen/core/TensorBody.h:2452
    #28 0x3ff79a802db in torch::autograd::THPVariable_resize_(_object*, _object*, _object*)::$_0::operator()(at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const /home/us
er/pytorch/torch/csrc/autograd/generated/python_variable_methods.cpp:13417
    #29 0x3ff7999f1eb in torch::autograd::THPVariable_resize_(_object*, _object*, _object*) /home/user/pytorch/torch/csrc/autograd/generated/python_variable_methods.cpp:13419
    #30 0x3ffa2c9b009 in method_vectorcall_VARARGS_KEYWORDS Objects/descrobject.c:344
    #31 ... #257: repeated CPython interpreter call frames elided (_PyObject_VectorcallTstate, PyObject_Vectorcall, call_function, _PyEval_EvalFrameDefault, _PyEval_Vector, _PyFunction_Vectorcall, PyVectorcall_Call, _PyObject_Call, PyObject_Call, do_call_core, method_vectorcall, _PyObject_FastCallDictTstate, _PyObject_Call_Prepend, slot_tp_call, _PyObject_MakeTpCall)

0x61000013d790 is located 80 bytes inside of 192-byte region [0x61000013d740,0x61000013d800)
freed by thread T0 here:
    #0 0x3ffa3237de5 in operator delete(void*) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160
    #1 0x3ff8e7e3221 in c10::TensorImpl::~TensorImpl() /home/user/pytorch/c10/core/TensorImpl.cpp:75

previously allocated by thread T0 here:
    #0 0x3ffa323734f in operator new(unsigned long) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:99
    #1 0x3ff4aeeb3d1 in c10::intrusive_ptr<c10::TensorImpl, c10::detail::intrusive_target_default_null_type<c10::TensorImpl> > c10::intrusive_ptr<c10::TensorImpl, c10::detail::intrusive_target_default_nul
l_type<c10::TensorImpl> >::make<c10::intrusive_ptr<c10::StorageImpl, c10::detail::intrusive_target_default_null_type<c10::StorageImpl> >, c10::DispatchKeySet&, caffe2::TypeMeta&>(c10::intrusive_ptr<c10::S
torageImpl, c10::detail::intrusive_target_default_null_type<c10::StorageImpl> >&&, c10::DispatchKeySet&, caffe2::TypeMeta&) /home/user/pytorch/c10/util/intrusive_ptr.h:498
    #2 0x3ff76f79e17  (/home/user/pytorch/build/lib.linux-s390x-cpython-310/torch/lib/libtorch_cpu.so+0x2fb79e17)

SUMMARY: AddressSanitizer: heap-use-after-free /home/user/pytorch/c10/core/SymInt.h:154 in c10::SymInt::is_heap_allocated() const
Shadow bytes around the buggy address:
  0x100c2000027aa0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
  0x100c2000027ab0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x100c2000027ac0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
  0x100c2000027ad0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x100c2000027ae0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
=>0x100c2000027af0: fd fd[fd]fd fd fd fd fd fd fd fd fd fd fd fd fd
  0x100c2000027b00: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00
  0x100c2000027b10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x100c2000027b20: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00
  0x100c2000027b30: 00 00 00 00 04 fa fa fa fa fa fa fa fa fa fa fa
  0x100c2000027b40: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
  Shadow gap:              cc
==1115867==ABORTING
```
</details>

<details>
<summary>Additional backtraces (not full)</summary>

Memory deallocation:
```
#0  operator delete (ptr=0x61000013d740) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160
#1  0x000003ffa77e3222 in c10::TensorImpl::~TensorImpl (this=0x61000013d740) at /home/user/pytorch/c10/core/TensorImpl.cpp:75
#2  0x000003ff63e76e8c in c10::intrusive_ptr<c10::TensorImpl, c10::UndefinedTensorImpl>::reset_ (this=0x3ffd7ec8230) at /home/user/pytorch/c10/util/intrusive_ptr.h:291
#3  0x000003ff63e76910 in c10::intrusive_ptr<c10::TensorImpl, c10::UndefinedTensorImpl>::~intrusive_ptr (this=0x3ffd7ec8230) at /home/user/pytorch/c10/util/intrusive_ptr.h:370
#4  0x000003ff63e67240 in at::TensorBase::~TensorBase (this=0x3ffd7ec8230) at /home/user/pytorch/aten/src/ATen/core/TensorBase.h:80
#5  0x000003ff63e85ee0 in at::Tensor::~Tensor (this=0x3ffd7ec8230) at aten/src/ATen/core/TensorBody.h:90
#6  0x000003ff63f67304 in resize__functionalization (dispatchKeySet=..., self=..., size=..., memory_format=...) at /home/user/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:173
#7  0x000003ff63f89258 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>), &(resize__functionalization(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>))>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>) (
    this=0x6030000390a0, args=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
#8  c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>), &(resize__functionalization(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>))>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>) (functor=0x6030000390a0, dispatchKeySet=..., args=..., args=...,
    args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:480
#9  0x000003ff6aca560a in c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > (
    unboxed_kernel_func=0x3ff63f88a80 <c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tenso
r const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>), &(resize__functionalization(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>))>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<long>, c10::optional<c10::MemoryFormat>)>, functor=0x6030000390a0,
    dispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50
#10 0x000003ff6aca715c in c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > (this=0x6210005e1b28, opHandle=...,
    dispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:96
#11 c10::Dispatcher::redispatch<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const (
    this=0x3ff919400e0 <c10::Dispatcher::realSingleton()::_singleton>, op=..., currentDispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:656
#12 0x000003ff6a82006c in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const (
    this=0x3ff919a07e0 <at::_ops::resize_::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)::op>, currentDispatchKeySet=..., args=...,
    args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:492
#13 at::_ops::resize_::redispatch (dispatchKeySet=..., self=..., size=..., memory_format=...) at /home/user/pytorch/build/aten/src/ATen/Operators_4.cpp:2144
#14 0x000003ff861d5e08 in at::redispatch::resize__symint (dispatchKeySet=..., self=..., size=..., memory_format=...) at aten/src/ATen/RedispatchFunctions.h:2847
#15 0x000003ff861b579e in torch::ADInplaceOrView::resize_ (ks=..., self=..., size=..., optional_memory_format=...) at /home/user/pytorch/torch/csrc/autograd/VariableTypeManual.cpp:401
```

Memory access:
```
#0  c10::SymInt::maybe_as_int (this=0x61000013d790) at /home/user/pytorch/c10/core/SymInt.h:215
#1  0x000003ff734d0a6e in c10::SymInt::sym_eq (this=0x61000013d790, sci=...) at /home/user/pytorch/c10/core/SymInt.cpp:69
#2  0x000003ff5f6ab0be in c10::SymInt::operator== (this=0x61000013d790, o=...) at /home/user/pytorch/c10/core/SymInt.h:177
#3  0x000003ff5f6aaede in std::__equal<false>::equal<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1162
#4  0x000003ff5f6aae4c in std::__equal_aux1<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1211
#5  0x000003ff5f6aae06 in std::__equal_aux<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1219
#6  0x000003ff5f6aad98 in std::equal<c10::SymInt const*, c10::SymInt const*> (__first1=0x61000013d790, __last1=0x61000013d7a0, __first2=0x602000015c30)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_algobase.h:1556
#7  0x000003ff2ff3c772 in c10::ArrayRef<c10::SymInt>::equals (this=0x3ffed7c9900, RHS=...) at /home/user/pytorch/c10/util/ArrayRef.h:188
#8  0x000003ff31891bc2 in c10::operator!=<c10::SymInt> (a1=..., a2=...) at /home/user/pytorch/c10/util/ArrayRef.h:341
#9  0x000003ff51eb5800 in torch::ADInplaceOrView::resize_ (ks=..., self=..., size=..., optional_memory_format=...) at /home/user/pytorch/torch/csrc/autograd/VariableTypeManual.cpp:408
#10 0x000003ff51ee59c8 in c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c
10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>
 > >::operator()(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) (this=0x6030007dca40, args=..., args=..., args=..., args=...)
    at /home/user/pytorch/aten/src/ATen/core/boxing/impl/WrapFunctionIntoFunctor.h:13
#11 c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt
>, c10::optional<c10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<
c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tenso
r const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) (functor=0x6030007dca40, dispatchKeySet=..., args=..., args=..., args=...)
    at /home/user/pytorch/aten/src/ATen/core/boxing/impl/make_boxed_from_unboxed_functor.h:480
#12 0x000003ff369a512a in c10::callUnboxedKernelFunction<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > (
    unboxed_kernel_func=0x3ff51ee51f0 <c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor const& (c10::DispatchKeySet, at::Tenso
r const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>), &torch::ADInplaceOrView::resize_>, at::Tensor const&, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&, c10::Ar
rayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > >, at::Tensor const& (c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::call(c10::OperatorKern
el*, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>, functor=0x6030007dca40, dispatchKeySet=..., args=..., args=..., args=...)
    at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:50
#13 0x000003ff369a6e90 in c10::KernelFunction::call<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> > (this=0x6210005e1bc8, opHandle=...,
    dispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/boxing/KernelFunction_impl.h:90
#14 c10::Dispatcher::redispatch<at::Tensor const&, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat> >(c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::Arr
ayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)> const&, c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const (
    this=0x3ff5d6400e0 <c10::Dispatcher::realSingleton()::_singleton>, op=..., currentDispatchKeySet=..., args=..., args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:656
#15 0x000003ff3652006c in c10::TypedOperatorHandle<at::Tensor const& (at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)>::redispatch(c10::DispatchKeySet, at::Tensor const&,
c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>) const (
    this=0x3ff5d6a07e0 <at::_ops::resize_::redispatch(c10::DispatchKeySet, at::Tensor const&, c10::ArrayRef<c10::SymInt>, c10::optional<c10::MemoryFormat>)::op>, currentDispatchKeySet=..., args=...,
    args=..., args=...) at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:492
#16 at::_ops::resize_::redispatch (dispatchKeySet=..., self=..., size=..., memory_format=...) at /home/user/pytorch/build/aten/src/ATen/Operators_4.cpp:2144
#17 0x000003ff51ed5e08 in at::redispatch::resize__symint (dispatchKeySet=..., self=..., size=..., memory_format=...) at aten/src/ATen/RedispatchFunctions.h:2847
#18 0x000003ff51ebbb68 in torch::autograd::VariableType::(anonymous namespace)::resize_ (ks=..., self=..., size=..., optional_memory_format=...)
    at /home/user/pytorch/torch/csrc/autograd/VariableTypeManual.cpp:243
```
</details>
Pull Request resolved: pytorch#101064
Approved by: https://github.com/Skylion007, https://github.com/albanD
ftxj pushed a commit to ftxj/pytorch that referenced this pull request May 25, 2023
arguments() returns a reference to a vector member of the object returned by the schema() call.
When the object returned by schema() is destroyed, the vector is deallocated as well;
its lifetime is not extended.

This issue was detected while running `pytest -v test/mobile/test_lite_script_type.py -k test_nest_typing_namedtuple_custom_classtype` under ASAN.

<details>
<summary>ASAN output</summary>

```
==1134126==ERROR: AddressSanitizer: heap-use-after-free on address 0x60d0005a5790 at pc 0x03ff844488d8 bp 0x03fff584afe8 sp 0x03fff584afd8
READ of size 8 at 0x60d0005a5790 thread T0
    #0 0x3ff844488d7 in __gnu_cxx::__normal_iterator<c10::Argument const*, std::vector<c10::Argument, std::allocator<c10::Argument> > >::__normal_iterator(c10::Argument const* const&) /usr/lib/gcc/s390x-i
bm-linux-gnu/11/include/g++-v11/bits/stl_iterator.h:1028
    #1 0x3ff8444293f in std::vector<c10::Argument, std::allocator<c10::Argument> >::begin() const /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_vector.h:821
    #2 0x3ff84d807d1 in torch::jit::toPyObject(c10::IValue) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:617
    #3 0x3ff84d80305 in torch::jit::toPyObject(c10::IValue) /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:604
    #4 0x3ff84856871 in pybind11::detail::type_caster<c10::IValue, void>::cast(c10::IValue, pybind11::return_value_policy, pybind11::handle) /home/user/pytorch/torch/csrc/jit/python/pybind.h:138
    #5 0x3ff85318191 in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is
_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_me
thod const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::operator()(pybind11::detail::function_call&) const /home/user/pytorch/cmake/../third_party/pybin
d11/include/pybind11/pybind11.h:249
    #6 0x3ff85317cfd in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is
_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_me
thod const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::__invoke(pybind11::detail::function_call&) /home/user/pytorch/cmake/../third_party/pybind11/incl
ude/pybind11/pybind11.h:224
    #7 0x3ff82ee52e9 in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:929
    #8 0x3ffab002903 in cfunction_call Objects/methodobject.c:543
    #9 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
    #10 0x3ffaaf8e919 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    #11 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #12 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #13 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #14 0x3ffab105447 in call_function Python/ceval.c:5891
    #15 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #16 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #17 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #18 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #19 0x3ffaaf8a615 in _PyObject_FastCallDictTstate Objects/call.c:142
    #20 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
    #21 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
    #22 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
    #23 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    #24 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #25 0x3ffab105447 in call_function Python/ceval.c:5891
    #26 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    #27 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #28 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #29 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #30 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #31 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #32 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #33 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #34 0x3ffab105447 in call_function Python/ceval.c:5891
    #35 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    #36 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #37 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #38 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #39 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #40 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #41 0x3ffab105447 in call_function Python/ceval.c:5891
    #42 0x3ffab0ff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
    #43 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #44 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #45 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #46 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #47 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #48 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #49 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #50 0x3ffab105447 in call_function Python/ceval.c:5891
    #51 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #52 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #53 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #54 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #55 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #56 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #57 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #58 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #59 0x3ffab105447 in call_function Python/ceval.c:5891
    #60 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #61 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #62 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #63 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #64 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #65 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #66 0x3ffaaf8ab9b in PyVectorcall_Call Objects/call.c:267
    #67 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
    #68 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    #69 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    #70 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #71 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #72 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #73 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #74 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    #75 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
    #76 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
    #77 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
    #78 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    #79 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #80 0x3ffab105447 in call_function Python/ceval.c:5891
    #81 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #82 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #83 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #84 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #85 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #86 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #87 0x3ffab105447 in call_function Python/ceval.c:5891
    #88 0x3ffab0ff7d7 in _PyEval_EvalFrameDefault Python/ceval.c:4198
    #89 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #90 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #91 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #92 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
    #93 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
    #94 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    #95 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    #96 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #97 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #98 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #99 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #100 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #101 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #102 0x3ffab105447 in call_function Python/ceval.c:5891
    #103 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #104 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #105 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #106 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #107 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #108 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #109 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #110 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #111 0x3ffab105447 in call_function Python/ceval.c:5891
    #112 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #113 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #114 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #115 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #116 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    #117 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
    #118 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
    #119 0x3ffaaf8ad17 in _PyObject_Call Objects/call.c:305
    #120 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    #121 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    #122 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #123 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #124 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #125 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #126 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #127 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #128 0x3ffab105447 in call_function Python/ceval.c:5891
    #129 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    #130 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #131 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #132 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #133 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #134 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #135 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #136 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #137 0x3ffab105447 in call_function Python/ceval.c:5891
    #138 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #139 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #140 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #141 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #142 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
    #143 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
    #144 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    #145 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    #146 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #147 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #148 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #149 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #150 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #151 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #152 0x3ffab105447 in call_function Python/ceval.c:5891
    #153 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    #154 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #155 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #156 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #157 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #158 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #159 0x3ffab105447 in call_function Python/ceval.c:5891
    #160 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #161 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #162 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #163 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #164 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
    #165 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
    #166 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    #167 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    #168 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    #169 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #170 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #171 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #172 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #173 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #174 0x3ffab105447 in call_function Python/ceval.c:5891
    #175 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #176 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #177 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #178 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #179 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #180 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    #181 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    #182 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #183 0x3ffab105447 in call_function Python/ceval.c:5891
    #184 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    #185 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #186 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #187 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    #188 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    #189 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
    #190 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
    #191 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
    #192 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    #193 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    #194 0x3ffab105447 in call_function Python/ceval.c:5891
    #195 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    #196 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    #197 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    #198 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#199 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
    csarofeen#200 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290
    csarofeen#201 0x3ffaaf8ada9 in PyObject_Call Objects/call.c:317
    csarofeen#202 0x3ffab1059c7 in do_call_core Python/ceval.c:5943
    csarofeen#203 0x3ffab0ffd39 in _PyEval_EvalFrameDefault Python/ceval.c:4277
    csarofeen#204 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#205 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#206 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#207 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#208 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    csarofeen#209 0x3ffab105447 in call_function Python/ceval.c:5891
    csarofeen#210 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    csarofeen#211 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#212 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#213 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#214 0x3ffaaf8e941 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#215 0x3ffaaf8eddd in method_vectorcall Objects/classobject.c:53
    csarofeen#216 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#216 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#217 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    csarofeen#218 0x3ffab105447 in call_function Python/ceval.c:5891
    csarofeen#219 0x3ffab0ff779 in _PyEval_EvalFrameDefault Python/ceval.c:4181
    csarofeen#220 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#221 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#222 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#223 0x3ffaaf8a695 in _PyObject_FastCallDictTstate Objects/call.c:153
    csarofeen#224 0x3ffaaf8b271 in _PyObject_Call_Prepend Objects/call.c:431
    csarofeen#225 0x3ffab03f307 in slot_tp_call Objects/typeobject.c:7494
    csarofeen#226 0x3ffaaf8a933 in _PyObject_MakeTpCall Objects/call.c:215
    csarofeen#227 0x3ffab0f0081 in _PyObject_VectorcallTstate Include/cpython/abstract.h:112
    csarofeen#228 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    csarofeen#229 0x3ffab105447 in call_function Python/ceval.c:5891
    csarofeen#230 0x3ffab0ffa57 in _PyEval_EvalFrameDefault Python/ceval.c:4231
    csarofeen#231 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#232 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#233 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#234 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#235 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    csarofeen#236 0x3ffab105447 in call_function Python/ceval.c:5891
    csarofeen#237 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    csarofeen#238 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#239 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#240 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#241 0x3ffab0f00a9 in _PyObject_VectorcallTstate Include/cpython/abstract.h:114
    csarofeen#242 0x3ffab0f013d in PyObject_Vectorcall Include/cpython/abstract.h:123
    csarofeen#243 0x3ffab105447 in call_function Python/ceval.c:5891
    csarofeen#244 0x3ffab0ff905 in _PyEval_EvalFrameDefault Python/ceval.c:4213
    csarofeen#245 0x3ffab0f052b in _PyEval_EvalFrame Include/internal/pycore_ceval.h:46
    csarofeen#246 0x3ffab102b67 in _PyEval_Vector Python/ceval.c:5065
    csarofeen#247 0x3ffaaf8aec1 in _PyFunction_Vectorcall Objects/call.c:342
    csarofeen#248 0x3ffaaf8ab15 in PyVectorcall_Call Objects/call.c:255
    csarofeen#249 0x3ffaaf8ac65 in _PyObject_Call Objects/call.c:290

0x60d0005a5790 is located 80 bytes inside of 136-byte region [0x60d0005a5740,0x60d0005a57c8)
freed by thread T0 here:
    #0 0x3ffab537de5 in operator delete(void*) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160
    #1 0x3ff55984fdb in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::deallocate(std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2>*, unsigned long) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:145

previously allocated by thread T0 here:
    #0 0x3ffab53734f in operator new(unsigned long) /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:99
    #1 0x3ff5598443f in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::allocate(unsigned long, void const*) /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:127
    #2 0x3fff5849ecf  ([stack]+0xb2ecf)

SUMMARY: AddressSanitizer: heap-use-after-free /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/stl_iterator.h:1028 in __gnu_cxx::__normal_iterator<c10::Argument const*, std::vector<c10::Argument, std::allocator<c10::Argument> > >::__normal_iterator(c10::Argument const* const&)
Shadow bytes around the buggy address:
  0x100c1a000b4aa0: fd fd fd fd fd fd fd fd fd fd fd fa fa fa fa fa
  0x100c1a000b4ab0: fa fa fa fa fd fd fd fd fd fd fd fd fd fd fd fd
  0x100c1a000b4ac0: fd fd fd fd fd fa fa fa fa fa fa fa fa fa fd fd
  0x100c1a000b4ad0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fa
  0x100c1a000b4ae0: fa fa fa fa fa fa fa fa fd fd fd fd fd fd fd fd
=>0x100c1a000b4af0: fd fd[fd]fd fd fd fd fd fd fa fa fa fa fa fa fa
  0x100c1a000b4b00: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x100c1a000b4b10: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x100c1a000b4b20: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x100c1a000b4b30: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x100c1a000b4b40: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
  Shadow gap:              cc
==1134126==ABORTING
```
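
The shadow-byte dump above can be decoded by hand. As the legend notes, one shadow byte covers 8 application bytes, and ASan maps an application address to its shadow byte via `(addr >> 3) + offset`. A minimal sketch (not part of this PR; the `SHADOW_OFFSET` value below is the typical x86-64 Linux constant, used purely for illustration — s390x uses a different platform offset):

```python
# Hedged sketch of AddressSanitizer's shadow-memory mapping, for reading
# the "Shadow bytes around the buggy address" section above.
SHADOW_OFFSET = 0x7FFF8000  # illustrative x86-64 value; platform-dependent

def shadow_address(addr: int) -> int:
    """Shadow byte covering `addr`: one shadow byte per 8 app bytes."""
    return (addr >> 3) + SHADOW_OFFSET

def interpret(shadow_byte: int) -> str:
    """Decode a shadow byte per the legend printed by ASan."""
    if shadow_byte == 0x00:
        return "addressable"
    if 0x01 <= shadow_byte <= 0x07:
        return f"partially addressable ({shadow_byte} bytes valid)"
    return {0xFA: "heap left redzone", 0xFD: "freed heap region"}.get(
        shadow_byte, "other poisoned state")

# The faulting address from the report maps to the byte shown as [fd],
# i.e. a freed heap region -- consistent with a heap-use-after-free.
print(hex(shadow_address(0x60D0005A5790)))
print(interpret(0xFD))
```

This is why the report marks `=>0x100c1a000b4af0 ... [fd]`: the load at the faulting address hit a shadow byte poisoned as `fd` (freed heap region).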

Additional backtraces (not full):
Allocation:
```
#0  __memset_z196 () at ../sysdeps/s390/memset-z900.S:144
#1  0x000003ff96f3072a in __asan::Allocator::Allocate (this=this@entry=0x3ff97041eb8 <__asan::instance>, size=size@entry=136, alignment=8, alignment@entry=0, stack=<optimized out>,
    stack@entry=0x3ffdbb45d78, alloc_type=<optimized out>, can_fill=true) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_allocator.cpp:599
#2  0x000003ff96f2c088 in __asan::asan_memalign (alignment=alignment@entry=0, size=size@entry=136, stack=stack@entry=0x3ffdbb45d78, alloc_type=alloc_type@entry=__asan::FROM_NEW)
    at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_allocator.cpp:1039
#3  0x000003ff96fb73b0 in operator new (size=136) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:99
#4  0x000003ff41404440 in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::allocate (this=0x3ffdbb468c0,
    __n=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:127
#5  0x000003ff414042a0 in std::allocator_traits<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > >::allocate (__a=...,
    __n=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/alloc_traits.h:464
#6  0x000003ff41403b66 in std::__allocate_guarded<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > > (__a=...)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/allocated_ptr.h:98
#7  0x000003ff4140372a in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::__shared_count<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (this=0x3ffdbb47888, __p=@0x3ffdbb47880: 0x0, __a=..., __args=..., __args=..., __args=..., __args=...)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:648
#8  0x000003ff41403328 in std::__shared_ptr<c10::FunctionSchema, (__gnu_cxx::_Lock_policy)2>::__shared_ptr<std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (this=0x3ffdbb47880, __tag=..., __args=..., __args=..., __args=..., __args=...) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:1342
#9  0x000003ff41402f06 in std::shared_ptr<c10::FunctionSchema>::shared_ptr<std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (
    this=0x3ffdbb47880, __tag=..., __args=..., __args=..., __args=..., __args=...) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:409
#10 0x000003ff41402b6e in std::allocate_shared<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (__a=...,
    __args=..., __args=..., __args=..., __args=...) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:862
#11 0x000003ff4140215c in std::make_shared<c10::FunctionSchema, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::vector<c10::Argument, std::allocator<c10::Argument> >, std::vector<c10::Argument, std::allocator<c10::Argument> > > (__args=..., __args=..., __args=..., __args=...)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:878
#12 0x000003ff413d180c in c10::TupleType::createWithSpec<c10::basic_string_view<char> > (qualName=..., field_names=std::vector of length 1, capacity 1 = {...},
    field_types=std::vector of length 1, capacity 1 = {...}, field_defaults=std::vector of length 0, capacity 0) at /home/user/pytorch/aten/src/ATen/core/type.cpp:769
#13 0x000003ff413b9ca6 in c10::TupleType::createNamed (qualName=..., field_names=std::vector of length 1, capacity 1 = {...}, field_types=std::vector of length 1, capacity 1 = {...})
    at /home/user/pytorch/aten/src/ATen/core/type.cpp:725
#14 0x000003ff4115fbac in c10::ivalue::TupleTypeFactory<c10::TupleType>::fallback (type=...) at /home/user/pytorch/aten/src/ATen/core/dynamic_type.cpp:383
#15 0x000003ff708217fe in c10::ivalue::Tuple::type<c10::TupleType> (this=0x6080004b8520) at /home/user/pytorch/aten/src/ATen/core/ivalue_inl.h:781
#16 0x000003ff70800740 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:613
#17 0x000003ff70800306 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:604
#18 0x000003ff702d6872 in pybind11::detail::type_caster<c10::IValue, void>::cast (src=...) at /home/user/pytorch/torch/csrc/jit/python/pybind.h:138
#19 0x000003ff70d98192 in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::operator()(pybind11::detail::function_call&) const (this=0x3ffdbb4ca20, call=...)
    at /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:249
#20 0x000003ff70d97cfe in pybind11::cpp_function::initialize<torch::jit::initJitScriptBindings(_object*)::$_45, c10::IValue, torch::jit::mobile::Module&, pybind11::tuple const&, pybind11::name, pybind11::is_method, pybind11::sibling, pybind11::arg>(torch::jit::initJitScriptBindings(_object*)::$_45&&, c10::IValue (*)(torch::jit::mobile::Module&, pybind11::tuple const&), pybind11::name const&, pybind11::is_method const&, pybind11::sibling const&, pybind11::arg const&)::{lambda(pybind11::detail::function_call&)#1}::__invoke(pybind11::detail::function_call&) (call=...)
    at /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:224
#21 0x000003ff6e9652ea in pybind11::cpp_function::dispatcher (self=<PyCapsule at remote 0x3ff83e27720>,
    args_in=(<torch._C.LiteScriptModule at remote 0x3ff811844b0>, (<Tensor at remote 0x3ff814efb00>,)), kwargs_in=0x0) at /home/user/pytorch/cmake/../third_party/pybind11/include/pybind11/pybind11.h:929
```

Deallocation:
```
#0  operator delete (ptr=0x60d0005a5740) at /var/tmp/portage/sys-devel/gcc-11.3.1_p20230303/work/gcc-11-20230303/libsanitizer/asan/asan_new_delete.cpp:160
#1  0x000003ff44904fdc in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> >::deallocate (this=0x3ffc5dc8020,
    __p=0x60d0005a5740, __t=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/ext/new_allocator.h:145
#2  0x000003ff44904fa8 in std::allocator_traits<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > >::deallocate (
    __a=..., __p=0x60d0005a5740, __n=1) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/alloc_traits.h:496
#3  0x000003ff449041f2 in std::__allocated_ptr<std::allocator<std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2> > >::~__allocated_ptr (
    this=0x3ffc5dc8030) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/allocated_ptr.h:74
#4  0x000003ff44904888 in std::_Sp_counted_ptr_inplace<c10::FunctionSchema, std::allocator<c10::FunctionSchema>, (__gnu_cxx::_Lock_policy)2>::_M_destroy (this=0x60d0005a5740)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:538
#5  0x000003ff43895a62 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release (this=0x60d0005a5740) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:184
#6  0x000003ff43895420 in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count (this=0x611000c40648) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:705
#7  0x000003ff4466e7f4 in std::__shared_ptr<c10::FunctionSchema, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr (this=0x611000c40640)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:1154
#8  0x000003ff4466d820 in std::shared_ptr<c10::FunctionSchema>::~shared_ptr (this=0x611000c40640) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:122
#9  0x000003ff448d82f6 in c10::TupleType::~TupleType (this=0x611000c40580) at /home/user/pytorch/aten/src/ATen/core/jit_type.h:1142
#10 0x000003ff448d8346 in c10::TupleType::~TupleType (this=0x611000c40580) at /home/user/pytorch/aten/src/ATen/core/jit_type.h:1142
#11 0x000003ff731296a4 in std::_Sp_counted_ptr<c10::TupleType*, (__gnu_cxx::_Lock_policy)2>::_M_dispose (this=0x603000c43ae0)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:348
#12 0x000003ff71eaf666 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release (this=0x603000c43ae0) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:168
#13 0x000003ff71eaf330 in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count (this=0x3ffc5dc9368) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:705
#14 0x000003ff73129ee4 in std::__shared_ptr<c10::TupleType, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr (this=0x3ffc5dc9360)
    at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr_base.h:1154
#15 0x000003ff73122390 in std::shared_ptr<c10::TupleType>::~shared_ptr (this=0x3ffc5dc9360) at /usr/lib/gcc/s390x-ibm-linux-gnu/11/include/g++-v11/bits/shared_ptr.h:122
#16 0x000003ff73d00788 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:613
#17 0x000003ff73d00306 in torch::jit::toPyObject (ivalue=...) at /home/user/pytorch/torch/csrc/jit/python/pybind_utils.cpp:604
```
</details>
Pull Request resolved: pytorch#101400
Approved by: https://github.com/zou3519