Merge branch 'main' into ze.zhang/merge_main #2420
Merged
Conversation
- torch version: 2.1.0.dev20230715 - torch commit hash: 6db8e8b9b7ae2232c3ab0eb7fe19830357695c7d - torchvision version: 0.16.0.dev20230715 Co-authored-by: Roll PyTorch Action <[email protected]>
- torch version: 2.1.0.dev20230716 - torch commit hash: c69b6e5da6f5892c2b2bd5fbf28dd5b568de362f - torchvision version: 0.16.0.dev20230716 Co-authored-by: Roll PyTorch Action <[email protected]>
…rions where the input type is bool (llvm#2304)
- torch version: 2.1.0.dev20230717 - torch commit hash: c437a4b1e0da5c00c15c983fecfeedb81b2355f5 - torchvision version: 0.16.0.dev20230717 Co-authored-by: Roll PyTorch Action <[email protected]>
Add e2e support by adding "tosa-to-scf"
…lvm#2309) In PyTorch, the `NumberType` is equal to `Union[int, float, complex]`. However, the abstract interpretation library was treating the `NumberType` as `Union[int, float]`, resulting in type mismatches when reifying certain dtype functions. This commit fixes the type inconsistency by having the abstract interpretation functions take as an input a `Union[int, float, complex]` for the ops that take `!torch.number` inputs.
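The mismatch can be illustrated with a small Python sketch (illustrative names, not torch-mlir's actual helpers): a scalar classifier typed as `Union[int, float]` has no case for `complex` inputs, so the abstract interpretation functions must accept the full `NumberType`.

```python
from typing import Union

# PyTorch's NumberType corresponds to Union[int, float, complex].
NumberType = Union[int, float, complex]

def scalar_kind(x: NumberType) -> str:
    # Check bool before int: isinstance(True, int) is True in Python.
    if isinstance(x, bool):
        return "bool"
    if isinstance(x, int):
        return "int"
    if isinstance(x, float):
        return "float"
    return "complex"

# Under the old Union[int, float] typing, this input had no valid case.
assert scalar_kind(1j) == "complex"
assert scalar_kind(3) == "int"
```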
This can happen when the input comes from an unsupported operator
- torch version: 2.1.0.dev20230718 - torch commit hash: 5e128c4fa1f1217e30c7179aeb5eb5eb95d4dd70 - torchvision version: 0.16.0.dev20230718 Co-authored-by: Roll PyTorch Action <[email protected]>
* Explicit inliner extension
* Fixed import formatting
- torch version: 2.1.0.dev20230719 - torch commit hash: 82e03ad95768645f27100929366530f5d62deffe - torchvision version: 0.16.0.dev20230719 Co-authored-by: Roll PyTorch Action <[email protected]>
[torch-dialect] fix torch.type_as op's folder by decomposing it to prim.dtype + aten.to_dtype
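The decomposition can be sketched in simplified Torch dialect IR (operand lists and types abbreviated; this is illustrative, not the exact generated IR):

```mlir
// Before: %out = a.type_as(b)
%out = torch.aten.type_as %a, %b : !torch.tensor, !torch.tensor -> !torch.tensor

// After: query b's dtype at runtime, then convert a to that dtype
%dtype = torch.prim.dtype %b : !torch.tensor -> !torch.int
%out = torch.aten.to.dtype %a, %dtype, %false, %false, %none : ... -> !torch.tensor
```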
* RecomposeComplexOps: Remove dead slice op
* lib/Dialect/Torch/IR/TorchOps.cpp: Fold slice ops even when they are on non-value tensors
* lib/Conversion/TorchToTosa/TorchToTosa.cpp: Fix slice start/end out of range/none
* lib/Dialect/Torch/IR/TorchOps.cpp: AtenSliceTensorOp::fold: Fold slices that go from 0:int_max
* More tests for aten.split.Tensor
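The `0:int_max` fold mirrors ordinary Python slicing semantics, where an out-of-range end index is clamped to the sequence length; a minimal sketch (plain Python, not the folder itself):

```python
# A slice over [0, int64_max) with step 1 selects every element, so the
# folder can replace the slice with its input tensor. INT64_MAX stands
# in for the sentinel value used for an open-ended slice end.
INT64_MAX = 2**63 - 1

data = [10, 20, 30, 40]
assert data[0:INT64_MAX:1] == data  # identity slice
```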
- torch version: 2.1.0.dev20230720 - torch commit hash: a16c87a767b22dbfa9e9435b1efe699db377ebf5 - torchvision version: 0.16.0.dev20230720 Co-authored-by: Roll PyTorch Action <[email protected]>
The implementation at this point was a remnant of the era when the pipeline was run only once. Rely instead on backend verification, after optimizations have had an opportunity to resolve some uncertainties (e.g. `!torch.optional`).
It's actually fine not to check the rank of the indices, because the conversion flattens the index tensor to (1, numElements) before applying tosa::gather anyway, and then reshapes the output tensor to the output shape of the aten.embedding.
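The flatten-gather-reshape sequence described above can be sketched in NumPy (illustrative names, not the actual TorchToTosa code):

```python
import numpy as np

# aten.embedding lowered via a gather: flatten indices to (1, numElements),
# gather rows of the weight matrix, then reshape the result back to
# indices.shape + (embedding_dim,). Any index rank works.
weight = np.arange(12.0).reshape(4, 3)   # (num_embeddings, embedding_dim)
indices = np.array([[0, 2], [3, 1]])     # arbitrary-rank index tensor

flat = indices.reshape(1, -1)            # (1, numElements)
gathered = weight[flat[0]]               # (numElements, embedding_dim)
out = gathered.reshape(*indices.shape, weight.shape[1])

assert out.shape == (2, 2, 3)
assert (out[1, 0] == weight[3]).all()    # row picked by indices[1, 0] == 3
```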
- torch version: 2.1.0.dev20230721 - torch commit hash: f228c8b8cac3db634516c7101dee077cbaa026ab - torchvision version: 0.16.0.dev20230721 Co-authored-by: Roll PyTorch Action <[email protected]>
- torch version: 2.1.0.dev20230722 - torch commit hash: b5222f140da05e40ac90ff42bd1db6564343daff - torchvision version: 0.16.0.dev20230722 Co-authored-by: Roll PyTorch Action <[email protected]>
- torch version: 2.1.0.dev20230723 - torch commit hash: a060bf3cf05c09906e78d7299efc8184568ea2e1 - torchvision version: 0.16.0.dev20230723 Co-authored-by: Roll PyTorch Action <[email protected]>
- torch version: 2.1.0.dev20230724 - torch commit hash: ba1da8199b3077b77a78a78e7f0dad166435182f - torchvision version: 0.16.0.dev20230724 Co-authored-by: Roll PyTorch Action <[email protected]>
…lvm#2332) Doing `module.to('lazy')` only moves the module member tensors to the device if they are created with `self.register_buffer` or `self.register_parameter`. Since the `self.tensor` tensor in `Add_Module` test is currently not created using the `self.register_*` methods, it is not being moved from CPU to lazy device, which is causing the test to fail on LTC backend. This commit uses `self.register_buffer` to fix the test on LTC backend. This commit also seems to fix the test for torchdynamo.
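The behavior can be sketched without PyTorch (a toy stand-in, not `torch.nn.Module`): `to()` walks the registered buffers, so a tensor stored as a plain attribute never moves.

```python
# Minimal sketch of why only registered tensors are moved by Module.to():
# the move loop iterates over registered buffers, not arbitrary attributes.
class FakeTensor:
    def __init__(self, device="cpu"):
        self.device = device

    def to(self, device):
        return FakeTensor(device)

class FakeModule:
    def __init__(self):
        self._buffers = {}

    def register_buffer(self, name, tensor):
        self._buffers[name] = tensor
        setattr(self, name, tensor)

    def to(self, device):
        for name, t in self._buffers.items():
            moved = t.to(device)
            self._buffers[name] = moved
            setattr(self, name, moved)
        return self

m = FakeModule()
m.plain = FakeTensor()               # like `self.tensor = ...` in the test
m.register_buffer("buf", FakeTensor())
m.to("lazy")
assert m.buf.device == "lazy"        # registered buffer was moved
assert m.plain.device == "cpu"       # plain attribute was left behind
```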
…tatic op (llvm#2338) By the way, this PR also adds the missing shape function for aten.masked_select.
* Add support for AvgPool1d
* Update AbstractInterpLibrary
* Support avgpool1d in linalg
* Refactored code
* Fix nit problem
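A reference sketch of the pooling semantics the linalg lowering must compute (assuming no padding; the helper name and simplified signature are illustrative, not the actual lowering):

```python
import numpy as np

# 1-D average pooling over a single channel: each output element is the
# mean of a kernel-sized window, advanced by `stride`.
def avg_pool1d(x, kernel, stride):
    n = (len(x) - kernel) // stride + 1
    return np.array([x[i * stride : i * stride + kernel].mean() for i in range(n)])

out = avg_pool1d(np.array([1.0, 2.0, 3.0, 4.0]), kernel=2, stride=2)
assert np.allclose(out, [1.5, 3.5])
```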
I saw tests failing when FileCheck wasn't already built.
Signed-off-by: Vivek Khandelwal <[email protected]>
- torch version: 2.1.0.dev20230808 - torch commit hash: c01a41cdec4414d8853c8474ddcaf2bd6990e5c8 - torchvision version: 0.16.0.dev20230808 Co-authored-by: Roll PyTorch Action <[email protected]>
This commit updates the `llvm-project` and `mlir-hlo` submodules to commits:
- llvm-project: f580901
- mlir-hlo: 503736d156c25022813c51cbdbe3b862d67a6916
Set PyTorch and TorchVision version to nightly release 2023-08-10. Signed-off-by: Vivek Khandelwal <[email protected]>
Signed-off-by: Vivek Khandelwal <[email protected]>
- torch version: 2.1.0.dev20230811 - torch commit hash: 422297f87fc25191bb392486c4bb8d25c4785d15 - torchvision version: 0.16.0.dev20230811 Co-authored-by: Roll PyTorch Action <[email protected]>
When using custom ops, PyTorch will sometimes insert namespaces into the abstract interpretation function name, in the format `__torch__.{namespace_1}.{namespace_2}...{op_name}`. The extra namespaces are not part of the abstract interpretation function name, so they need to be removed before generating the library of MLIR snippets of abstract interpretation functions. This commit adds support for removing the namespace information.
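A sketch of the stripping step (hypothetical helper, assuming the op name is the final dotted component; the real torch-mlir implementation may differ):

```python
# Strip inserted namespaces from a qualified name of the form
# `__torch__.{namespace_1}.{namespace_2}...{op_name}`, keeping only the
# trailing op name. Names without the prefix pass through unchanged.
def strip_namespaces(qualified_name: str) -> str:
    prefix = "__torch__."
    if qualified_name.startswith(prefix):
        return qualified_name[len(prefix):].rsplit(".", 1)[-1]
    return qualified_name

assert strip_namespaces("__torch__.my_ns.sub.my_op") == "my_op"
assert strip_namespaces("plain_op") == "plain_op"
```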
- torch version: 2.1.0.dev20230812 - torch commit hash: c9397a7bc833cdfdf64aa023631ae5e1c7e9cee4 - torchvision version: 0.16.0.dev20230812 Co-authored-by: Roll PyTorch Action <[email protected]>
- torch version: 2.1.0.dev20230813 - torch commit hash: 3748ee4a8c4032dac08bd2de0ebf039ad22e0d1e - torchvision version: 0.16.0.dev20230813 Co-authored-by: Roll PyTorch Action <[email protected]>
- torch version: 2.1.0.dev20230814 - torch commit hash: 53551b5c87ca582d71d4bbaf82050d05c3c2f534 - torchvision version: 0.16.0.dev20230814 Co-authored-by: Roll PyTorch Action <[email protected]>
…nually generating it (llvm#2344)
* [Torch Dialect] Replace none-index in aten.Index.Tensor's param by manually generating it
Co-authored-by: Jiawei Wu <[email protected]>
Co-authored-by: Jianzhe Xiao <[email protected]>
* Minor typo fix
* Add new failed e2e tests for LTC
* Fix typo
* Address comments
* Add more e2e tests
* Add failed e2e tests for LTC
* Address comments
* Remove decomposition for AtenIndexTensorHackedTwinOp
- torch version: 2.1.0.dev20230815 - torch commit hash: e4d5143f8c73014521f44c3e9b46c642a300dd2f - torchvision version: 0.16.0.dev20230815 Co-authored-by: Roll PyTorch Action <[email protected]>
This commit updates the `llvm-project` and `mlir-hlo` submodules to commits:
- llvm-project: a3f2751
- mlir-hlo: 97c7e4b4506c3a2441c923e592833f45da439009

Changes made:
- Rename `getSuccessorEntryOperands` to `getEntrySuccessorOperands` and remove `operands` from `getSuccessorRegions` (https://reviews.llvm.org/D157506)
- Make `TypeConverter` `const` (https://reviews.llvm.org/D157601)
- torch version: 2.1.0.dev20230816 - torch commit hash: 3af011b858f5e5c40fd8e9d41fa7f31a928b3b47 - torchvision version: 0.16.0.dev20230816 Co-authored-by: Roll PyTorch Action <[email protected]>
- torch version: 2.1.0.dev20230817 - torch commit hash: 3522f2a7b7f73e928a8366cb7bd62ab3883dbe75 - torchvision version: 0.16.0.dev20230817 Co-authored-by: Roll PyTorch Action <[email protected]>
* [TOSA] Fix conversion for depthwise convolutions * Add e2e tests for depthwise and grouped convolutions Co-authored-by: Lucas Camphausen <[email protected]>
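Depthwise convolution (groups equal to input channels) filters each channel independently with its own kernel; a minimal NumPy reference of that semantics (illustrative, not the TOSA conversion code):

```python
import numpy as np

# 1-D depthwise convolution: channel `ch` of the output depends only on
# channel `ch` of the input and its dedicated kernel row.
def depthwise_conv1d(x, w):
    # x: (channels, width), w: (channels, kernel_size)
    c, width = x.shape
    k = w.shape[1]
    out = np.empty((c, width - k + 1))
    for ch in range(c):
        for i in range(width - k + 1):
            out[ch, i] = (x[ch, i:i + k] * w[ch]).sum()
    return out

x = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
w = np.array([[1.0, 1.0], [0.5, 0.5]])
out = depthwise_conv1d(x, w)
assert np.allclose(out, [[3.0, 5.0], [4.5, 5.5]])
```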
…m#2403) Sean has decided to move on to other ventures and has requested that I help him disengage by resuming top level accountability for the project.
- torch version: 2.1.0.dev20230819 - torch commit hash: 668af075012c0857053a7cdf7ca764bb3569c6f1 - torchvision version: 0.16.0.dev20230819 Co-authored-by: Roll PyTorch Action <[email protected]>
- torch version: 2.1.0.dev20230820 - torch commit hash: 4ce227bfb953d1f64c4d86cc913144ee2a210e57 - torchvision version: 0.16.0.dev20230820 Co-authored-by: Roll PyTorch Action <[email protected]>
* LTC/TorchMLIR multi-output operations support
* Update torch-mlir JIT lowering to support ops with a dynamic number of outputs
* Added support for aten::split_copy, aten::split_with_sizes_copy
* Fix native function for aten::split; cleanup code
* Fix TorchMlirTensorList lowering
* Remove xfails
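`aten::split_with_sizes` is a canonical example of an op whose output count depends on an operand value; its semantics in a few lines of NumPy (illustrative sketch, not the LTC lowering):

```python
import numpy as np

# Split a 1-D array into consecutive chunks whose lengths are given by
# `sizes`; the number of outputs equals len(sizes), which is only known
# from the operand, not from the op definition.
def split_with_sizes(x, sizes):
    out, offset = [], 0
    for s in sizes:
        out.append(x[offset:offset + s])
        offset += s
    return out

parts = split_with_sizes(np.arange(6), [2, 1, 3])
assert [p.tolist() for p in parts] == [[0, 1], [2], [3, 4, 5]]
```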
Per request from llvm#2403.
* [Stablehlo Dialect] Fix lowering of batch-norm inference with mixed types
* Update
* [LTC] Add shape_inference_(add|uniform)
* Add torch.multinomial op
* Update ODS gen; add normal_functional and erfinv ops support
* New TorchMLIR ops: clamp_min.Tensor, clamp_max.Tensor, xlogy, binary_cross_entropy, log_sigmoid_forward, sigmoid_backward, cosine_embedding_loss, scatter.reduce
* Improve the shape inference logic of whereOp: infer the result tensor according to the broadcasting semantics
* Added aten::sgn
* Add shape inference logic for hardtanh_backward op
* Added new Torch-MLIR ops (Co-authored-by: GlebKazantaev <[email protected]>)
* Add support for elu lowering
* Add support for elu_backward lowering
* Support fmod, remainder, and floor_divide: emit generated op defs for remainder.Tensor and fmod.Tensor; add shape inference implementations for remainder.Scalar, fmod.Scalar, and floor_divide.Tensor
* Add shape inference logic for im2col (pytorch.nn.unfold gets decomposed into im2col)
* Add aten::eye and aten::eye.m support
* Add tracing for linalg_qr
* Update GeneratedTorchOps.td
* Update xfails
* Fix unbound variable issue in torch_ods_gen

Signed-off-by: rahul shrivastava <[email protected]>
Co-authored-by: Mark Browning <[email protected]>
Co-authored-by: zihaoc-cerebras <[email protected]>
Co-authored-by: rahul shrivastava <[email protected]>
Co-authored-by: Gokul Ramakrishnan <[email protected]>
Co-authored-by: glebk-cerebras <[email protected]>
Co-authored-by: Behzad Abghari <[email protected]>
Co-authored-by: Ahmed Elkoushy <[email protected]>
…ssing return sentence (llvm#2409)
This way, we can keep CI green without being forced to ignore _all_ errors that arise in stable PyTorch builds
mlir-tcp update with torch-mlir main branch