[Tcp] Merge main into mlir-tcp #2518

Merged: navahgar merged 42 commits into llvm:mlir-tcp from navahgar:raghavanr/torch-mlir-upgrade on Oct 18, 2023
Conversation
Set PyTorch and TorchVision version to nightly release 2023-09-18. Signed-Off By: Vivek Khandelwal <[email protected]>
Adds ODS for `avg_pool2d` and `avg_pool3d`, including their backward and `adaptive_` variants.
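The ODS additions above rely on shape-transfer functions for the pooling ops. As a hedged illustration (a toy Python model of the standard PyTorch pooling arithmetic, not the actual shape functions added by this commit), the output extent along one spatial dimension of `avg_pool2d` can be computed like this:

```python
import math

def avg_pool2d_out_dim(in_size, kernel, stride, padding, ceil_mode=False):
    """Output extent along one spatial dim, following the standard
    PyTorch pooling formula: floor((in + 2*pad - k) / stride) + 1,
    with ceil rounding when ceil_mode is set."""
    numer = in_size + 2 * padding - kernel
    if ceil_mode:
        out = math.ceil(numer / stride) + 1
        # The last window may not start inside the padding region.
        if (out - 1) * stride >= in_size + padding:
            out -= 1
        return out
    return numer // stride + 1

# e.g. a 7-wide input with kernel 3, stride 2, padding 1 yields 4 outputs
```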
MSVC (and other compilers with implicit narrowing warnings) don't like this type mismatch.
Set PyTorch and TorchVision version to nightly release 2023-09-22. Signed-Off By: Vivek Khandelwal <[email protected]>
While trying to fix a bug in the `ConvertAtenViewOp` pattern in the linalg backend, I realized that the pattern had become quite complex and had accumulated some dead code, making it hard to reason about. This commit simplifies the pattern considerably. The main changes are:

1. All the static helper functions in the `ConvertAtenViewOp` class have been simplified, both in their signatures and their bodies. Each one now performs simple calculations on arrays and takes the least number of arguments necessary.
2. The body of [the `while` loop](https://github.com/ramiro050/torch-mlir/blob/9fce566b0cb64ff2b198693d1f6ee9580b8fa01f/lib/Conversion/TorchToLinalg/DataMovement.cpp#L407) inside the main pattern now works on `MutableArrayRef` slices, avoiding the need to track `start` and `end` indices for the input and output shape arrays.
3. All the heuristics used to determine the mapping between input and output dimensions now live in [this relatively short `if-else` section](https://github.com/ramiro050/torch-mlir/blob/9fce566b0cb64ff2b198693d1f6ee9580b8fa01f/lib/Conversion/TorchToLinalg/DataMovement.cpp#L428-L460), making it easy to see what is going on.
4. Dead code was eliminated, and some documentation comments were updated.

This commit does not add any new functionality to the `ConvertAtenViewOp` pattern.
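The core of the mapping heuristic is grouping input and output dims whose element counts agree. A minimal Python sketch of that idea (a hypothetical model for static shapes only, not the C++ pattern itself):

```python
def group_view_dims(in_shape, out_shape):
    """Greedily pair minimal slices of in_shape and out_shape whose
    products agree, modeling how a view maps dim groups to dim groups.
    Assumes the two shapes describe the same number of elements."""
    groups, i, j = [], 0, 0
    while i < len(in_shape) and j < len(out_shape):
        i2, j2 = i + 1, j + 1
        p_in, p_out = in_shape[i], out_shape[j]
        # Grow the smaller-product side until the products match.
        while p_in != p_out:
            if p_in < p_out:
                p_in *= in_shape[i2]; i2 += 1
            else:
                p_out *= out_shape[j2]; j2 += 1
        groups.append((tuple(in_shape[i:i2]), tuple(out_shape[j:j2])))
        i, j = i2, j2
    # Any leftover dims (e.g. trailing unit dims) form one final group.
    if i < len(in_shape) or j < len(out_shape):
        groups.append((tuple(in_shape[i:]), tuple(out_shape[j:])))
    return groups
```

For example, viewing `(2, 12, 5)` as `(2, 3, 4, 5)` pairs the `12` with the `(3, 4)` group and leaves the other dims matched one-to-one.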
Set PyTorch and TorchVision version to nightly release 2023-09-26. aten._convolution.deprecated changes done because upstream PyTorch has now added support for fp16 native convolution on CPU. Refer: pytorch/pytorch@7c90521 Signed-Off By: Vivek Khandelwal <[email protected]>
torch version: 2.2.0.dev20230927 torch commit hash: d7520d8668dc08f7bed27a64f006c909006e653a torchvision version: 0.17.0.dev20230927 Co-authored-by: Roll PyTorch Action <[email protected]>
The LTC backend has drifted from being able to pass tests on the stable PyTorch version, so pinning to nightly on ARM. Signed-Off By: Vivek Khandelwal <[email protected]>
…lvm#2496) When importing dynamic shaped programs from Dynamo, via torch.compile or torch.export, we can assume that strict symbolic shape checks have been done prior to generating torch IR. Among other shape checking, this eliminates the case where an unknown dimension can be dynamically '1' in a way that signals a broadcast. Adds a `isAssumingStrictSymbolicShapes` utility which consults a `torch.assume_strict_symbolic_shapes` attribute on an enclosing scope and returns true if present. In the linalg pipeline, many runtime checks are elided when this returns true.
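The scope walk the utility performs can be sketched in a few lines. This is a toy Python model (the op is stood in for by a dict with hypothetical `attrs` and `parent` keys), not the actual C++ helper:

```python
def is_assuming_strict_symbolic_shapes(op):
    """Walk the chain of enclosing scopes and return True if any of
    them carries the torch.assume_strict_symbolic_shapes unit attribute.
    `op` is modeled as {"attrs": (...), "parent": op_or_None}."""
    while op is not None:
        if "torch.assume_strict_symbolic_shapes" in op.get("attrs", ()):
            return True
        op = op.get("parent")
    return False
```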
Set PyTorch and TorchVision version to nightly release 2023-09-28. aten.baddbmm changes done because upstream PyTorch has now added support for fp16 gemm on CPU. Refer: pytorch/pytorch@9399e0b
Signed-Off By: Vivek Khandelwal <[email protected]>
…ar op Signed-Off By: Vivek Khandelwal <[email protected]>
torch version: 2.2.0.dev20231002 torch commit hash: 4dae8b49630d2784f6a5d8726db30923e2d1e077 torchvision version: 0.17.0.dev20231002 Co-authored-by: Roll PyTorch Action <[email protected]>
Also, revert llvm#2488. Disabling LTC based on the discussion here: https://discord.com/channels/636084430946959380/742573221882364009/1156272667813494824
Add linspace/cumprod/roll ops to ODS and add shape inference functions to make it work with LTC. Also, add some tensor utils to LTC library for searching for non-detach copy nodes.
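The shape-inference rules for these three ops are simple: `linspace` produces a rank-1 tensor of `steps` elements, while `cumprod` and `roll` preserve the input shape. A hedged toy sketch of such shape functions (hypothetical names, not the actual LTC code):

```python
def infer_shape(op, args):
    """Toy result-shape functions for the newly covered ops."""
    if op == "linspace":
        start, end, steps = args
        return (steps,)          # rank-1, `steps` elements
    if op in ("cumprod", "roll"):
        input_shape = args[0]
        return tuple(input_shape)  # shape-preserving
    raise NotImplementedError(op)
```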
…d op Signed-Off By: Vivek Khandelwal <[email protected]>
torch version: 2.2.0.dev20231003 torch commit hash: 4e30fa82315208dcd38fa16a0ed9851fa8e98bc9 torchvision version: 0.17.0.dev20231003 Co-authored-by: Roll PyTorch Action <[email protected]>
This commit adds to the lowering of `aten.view` handling for the following cases:

- `(..., a.size(i))` -> `(..., a.size(i), 1, ..., 1)`
- `(..., a.size(i), 1, ..., 1)` -> `(..., a.size(i))`
- `(a.size(i), ...)` -> `(1, ..., 1, a.size(i), ...)`
- `(1, ..., 1, a.size(i), ...)` -> `(a.size(i), ...)`
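All four cases amount to inserting or removing a run of size-1 dims at one end of the shape. A small illustrative checker (a hypothetical Python model, not the lowering code) makes the pattern concrete:

```python
def is_unit_dim_view(src, dst):
    """True if dst equals src after inserting or removing a run of
    size-1 dims at the front or at the back, i.e. one of the four
    view cases listed above. Static shapes only."""
    def strip_leading(s):
        i = 0
        while i < len(s) - 1 and s[i] == 1:
            i += 1
        return s[i:]

    def strip_trailing(s):
        j = len(s)
        while j > 1 and s[j - 1] == 1:
            j -= 1
        return s[:j]

    src, dst = list(src), list(dst)
    return strip_trailing(src) == strip_trailing(dst) or \
           strip_leading(src) == strip_leading(dst)
```

So `(3, 5)` matched against `(3, 5, 1, 1)` or `(1, 1, 3, 5)` qualifies, while inserting a unit dim in the middle, e.g. `(2, 3)` vs `(2, 1, 3)`, does not.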
torch version: 2.2.0.dev20231004 torch commit hash: 56af607c0437ed7321da4b96a4dbccdbd8b5a98b torchvision version: 0.17.0.dev20231004 Co-authored-by: Roll PyTorch Action <[email protected]>
Strict symbolic shapes allow us to assume numpy-style dynamic broadcasts never occur. This allows us to strengthen the folder for broadcasts to cases where the rank is the same and all shapes match (including dynamic sentinel values).
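The strengthened folding condition can be stated as a small predicate. The sketch below is a hedged Python model (with `-1` as a stand-in for the dynamic-dim sentinel), not the actual folder:

```python
DYNAMIC = -1  # stand-in for the dynamic-dimension sentinel

def can_fold_broadcast(in_shape, out_shape, strict_symbolic_shapes):
    """A broadcast is a no-op (foldable) when operand and result have
    the same rank and pairwise-equal dims. Under strict symbolic shapes
    a dynamic dim matching a dynamic dim also counts, since it cannot
    silently be 1 and broadcast at runtime."""
    if len(in_shape) != len(out_shape):
        return False
    if strict_symbolic_shapes:
        return all(d == o for d, o in zip(in_shape, out_shape))
    # Without the assumption, only fully static, equal shapes are safe.
    return all(d != DYNAMIC and d == o for d, o in zip(in_shape, out_shape))
```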
torch version: 2.2.0.dev20231005 torch commit hash: 439cba92777ff61b49d24096edfaf128fbd742ea torchvision version: 0.17.0.dev20231005 Co-authored-by: Roll PyTorch Action <[email protected]>
…lvm#2505) With strict symbolic shapes, we can assume numpy-style dynamic broadcasts never occur. This improves the lowering in the presence of this assumption.
torch version: 2.2.0.dev20231006 torch commit hash: 20217d1426d99d0caa70e1473d89e0c834b7f35e torchvision version: 0.17.0.dev20231006 Co-authored-by: Roll PyTorch Action <[email protected]>
- Add aten.unflatten.int op
- Add its torch-to-tosa lowering
- Update the TorchToTosa/basic.mlir tests

To test e2e tosa lowering: `python -m e2e_testing.main -v -c=tosa`

Co-authored-by: Ze Zhang <[email protected]>
- Add aten.isclose op
- Add its torch-to-tosa lowering
- Update the TorchToTosa/basic.mlir tests

To test e2e tosa lowering: `python -m e2e_testing.main -v -c=tosa`

Co-authored-by: Ze Zhang <[email protected]>
The main purpose is to bring in the new mesh dialect change. llvm/llvm-project#68007
As titled. --------- Co-authored-by: Ze Zhang <[email protected]>
The last llvm bump in llvm#2511 pointed to llvm/llvm-project@b44b349; however, the bazel build upstream was not clean at that point:

```
ERROR: /root/.cache/bazel/_bazel_root/b89349c08f7224396763d14fe35cba11/external/llvm-project/mlir/BUILD.bazel:5837:18: TdGenerate external/llvm-project/mlir/include/mlir/Dialect/LLVMIR/NVVMOpsInterface.h.inc failed: (Exit 1): mlir-tblgen failed: error executing command ...
external/llvm-project/mlir/include/mlir/Dialect/LLVMIR/NVVMOps.td:20:9: error: Could not find include file 'mlir/Dialect/LLVMIR/BasicPtxBuilderInterface.td'
include "mlir/Dialect/LLVMIR/BasicPtxBuilderInterface.td"
        ^
external/llvm-project/mlir/include/mlir/Dialect/LLVMIR/NVVMOps.td:20:9: error: Unexpected token at top level
include "mlir/Dialect/LLVMIR/BasicPtxBuilderInterface.td"
        ^
```

The bazel fixes followed in a subsequent commit at llvm/llvm-project@28b27c1. This PR bumps LLVM by a few more commits (to include the bazel fixes), which helps restore Torch-MLIR's bazel build back to 🟢.

GHA workflow to test bazel build: https://github.com/sjain-stanford/torch-mlir/actions/runs/6555101471/job/17803082508
navahgar force-pushed the raghavanr/torch-mlir-upgrade branch from bd1f66c to b846437 on October 18, 2023 05:53
As titled