
Release 1.12 Install torch from test channel, Pin builder and xla repo #77983

Merged: 1 commit into release/1.12 on May 20, 2022

Conversation

@atalman (Contributor) commented on May 20, 2022:

Release 1.12 Install torch from test channel, Pin builder and xla repo

@seemethere seemethere changed the base branch from viable/strict to release/1.12 May 20, 2022 17:50
@facebook-github-bot (Contributor) commented on May 20, 2022:


❌ 1 new failure, 1 base failure

As of commit d25fbdd (more details on the Dr. CI page):

  • 1/2 failures introduced in this PR
  • 1/2 broken upstream at merge base a119b7f on May 20 from 10:43am to 2:18pm

🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages

See GitHub Actions build pull / linux-xenial-py3.7-gcc5.4 / test (backwards_compat, 1, 1, linux.2xlarge) (1/1)

Step: "Test"

2022-05-20T18:04:13.1996782Z processing existing schema:  text(__torch__.torch.classes.profiling.SourceRef _0) -> (str _0)
2022-05-20T18:04:13.1998251Z processing existing schema:  count(__torch__.torch.classes.profiling.InstructionStats _0) -> (int _0)
2022-05-20T18:04:13.1999666Z processing existing schema:  duration_ns(__torch__.torch.classes.profiling.InstructionStats _0) -> (int _0)
2022-05-20T18:04:13.2001414Z processing existing schema:  source(__torch__.torch.classes.profiling.SourceStats _0) -> (__torch__.torch.classes.profiling.SourceRef _0)
2022-05-20T18:04:13.2003477Z processing existing schema:  line_map(__torch__.torch.classes.profiling.SourceStats _0) -> (Dict(int, __torch__.torch.classes.profiling.InstructionStats) _0)
2022-05-20T18:04:13.2004269Z processing existing schema:  __init__(__torch__.torch.classes.profiling._ScriptProfile _0) -> (NoneType _0)
2022-05-20T18:04:13.2006193Z processing existing schema:  enable(__torch__.torch.classes.profiling._ScriptProfile _0) -> (NoneType _0)
2022-05-20T18:04:13.2007271Z processing existing schema:  disable(__torch__.torch.classes.profiling._ScriptProfile _0) -> (NoneType _0)
2022-05-20T18:04:13.2010008Z processing existing schema:  _dump_stats(__torch__.torch.classes.profiling._ScriptProfile _0) -> (__torch__.torch.classes.profiling.SourceStats[] _0)
2022-05-20T18:04:13.2011129Z processing existing schema:  __init__(__torch__.torch.classes.dist_rpc.WorkerInfo _0, str _1, int _2) -> (NoneType _0)
2022-05-20T18:04:13.2011571Z The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not. 
2022-05-20T18:04:13.2011603Z 
2022-05-20T18:04:13.2011738Z Broken ops: [
2022-05-20T18:04:13.2012219Z 	aten::max_unpool3d_backward(Tensor grad_output, Tensor self, Tensor indices, int[3] output_size, int[3] stride, int[3] padding) -> (Tensor)
2022-05-20T18:04:13.2012723Z 	aten::max_unpool3d_backward.grad_input(Tensor grad_output, Tensor self, Tensor indices, int[3] output_size, int[3] stride, int[3] padding, *, Tensor(a!) grad_input) -> (Tensor(a!))
2022-05-20T18:04:13.2012984Z 	aten::max_unpool2d_backward(Tensor grad_output, Tensor self, Tensor indices, int[2] output_size) -> (Tensor)
2022-05-20T18:04:13.2013433Z 	aten::max_unpool2d_backward.grad_input(Tensor grad_output, Tensor self, Tensor indices, int[2] output_size, *, Tensor(a!) grad_input) -> (Tensor(a!))
2022-05-20T18:04:13.2013594Z 	aten::_cat(Tensor[] tensors, int dim=0) -> (Tensor)
2022-05-20T18:04:13.2013798Z 	aten::_cat.out(Tensor[] tensors, int dim=0, *, Tensor(a!) out) -> (Tensor(a!))
2022-05-20T18:04:13.2014062Z 	aten::scatter_reduce.two(Tensor self, int dim, Tensor index, str reduce, *, int? output_size=None) -> (Tensor)
2022-05-20T18:04:13.2014416Z 	quantized::conv2d_cudnn(Tensor act, Tensor weight, Tensor? bias, int[] stride, int[] padding, int[] dilation, int groups, float output_scale, int output_zero_point) -> (Tensor)
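The backwards_compat check above flags operators whose schemas existed at the merge base but are missing or changed in the PR head. The following is an illustrative sketch of that comparison idea only (hypothetical helper names, not the actual PyTorch test, which also supports an allowlist and overload-aware matching):

```python
def schema_key(schema: str) -> str:
    """Extract the operator name (including any .overload suffix),
    i.e. everything before the argument list."""
    return schema.split("(", 1)[0].strip()


def find_broken_ops(base_schemas, head_schemas):
    """Return base schemas that are missing or textually changed in head."""
    head = {schema_key(s): s for s in head_schemas}
    broken = []
    for s in base_schemas:
        key = schema_key(s)
        # An op is "broken" if it vanished or its signature string changed.
        if key not in head or head[key] != s:
            broken.append(s)
    return broken


# Example: aten::_cat (from the failure list above) removed in the PR head.
base = [
    "aten::_cat(Tensor[] tensors, int dim=0) -> (Tensor)",
    "aten::relu(Tensor self) -> (Tensor)",
]
head = [
    "aten::relu(Tensor self) -> (Tensor)",
]
print(find_broken_ops(base, head))
# -> ['aten::_cat(Tensor[] tensors, int dim=0) -> (Tensor)']
```

In the real check, intentional removals such as these are handled by adding the ops to an allowlist rather than by reverting the schema change.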

🚧 1 fixed upstream failure:

These were probably caused by upstream breakages that were already fixed.

Please rebase on the viable/strict branch.

If your commit is older than viable/strict, run these commands:

git fetch https://github.com/pytorch/pytorch viable/strict
git rebase FETCH_HEAD

This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.


@seemethere seemethere merged commit 8ff2bc0 into release/1.12 May 20, 2022
@seemethere seemethere deleted the release112_install branch May 20, 2022 17:51
3 participants