[Tcp] Merge main into mlir-tcp #2518
Commits on Sep 14, 2023
- 7a7be60
Commits on Sep 19, 2023
- b03efdf build: manually update PyTorch version
  Set PyTorch and TorchVision version to nightly release 2023-09-18. Signed-Off By: Vivek Khandelwal <[email protected]>
- 278c41e Bump llvm-project to f66cd9e. (llvm#2466)
  Picks up DenseResourceElementsAttr python support and fixes minf/maxf C++ rename.
Commits on Sep 20, 2023
- 20ea1c9
- 023fc90 [Torch Dialect] add avg_pool 2d and 3d op variants (llvm#2473)
  Adds ODS for `avg_pool2d` and `avg_pool3d`, including their backward and `adaptive_` variants.
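For context, the ops this commit adds ODS for can be exercised at the PyTorch level as in the sketch below. This is illustrative only (standard torch.nn.functional API, not code from the commit); the backward variants are reached through autograd rather than called directly.

```python
import torch
import torch.nn.functional as F

x4d = torch.randn(1, 3, 8, 8)     # N, C, H, W
x5d = torch.randn(1, 3, 8, 8, 8)  # N, C, D, H, W

y2d = F.avg_pool2d(x4d, kernel_size=2)                  # 2-D average pooling
y3d = F.avg_pool3d(x5d, kernel_size=2)                  # 3-D average pooling
a2d = F.adaptive_avg_pool2d(x4d, output_size=(4, 4))    # adaptive 2-D variant
a3d = F.adaptive_avg_pool3d(x5d, output_size=(4, 4, 4)) # adaptive 3-D variant
print(y2d.shape, y3d.shape, a2d.shape, a3d.shape)
```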
- b9847b1 Fixing implicit double to float casts. (llvm#2476)
  MSVC (and other compilers with implicit narrowing warnings) don't like this type mismatch.
Commits on Sep 21, 2023
- 059041e
Commits on Sep 22, 2023
- 6699cbc build: manually update PyTorch version (llvm#2480)
  Set PyTorch and TorchVision version to nightly release 2023-09-22. Signed-Off By: Vivek Khandelwal <[email protected]>
Commits on Sep 23, 2023
- 5f772e8
Commits on Sep 25, 2023
- a520d39 [MLIR][TORCH] Add device "cpu" support for aten.to.dtype_layout op (l…
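For reference, a minimal PyTorch-level sketch of the kind of call that reaches this op. Whether the importer emits aten.to.dtype_layout depends on how dtype/device are passed, so treat that mapping as an assumption rather than a guarantee.

```python
import torch

x = torch.randn(2, 3)
# Passing dtype and device together typically maps to the
# aten.to.dtype_layout overload when imported into torch-mlir (assumed).
y = x.to(dtype=torch.float64, device="cpu")
print(y.dtype, y.device)  # torch.float64 cpu
```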
Commits on Sep 26, 2023
- c9fd789 [NFC] Clean-up ConvertAtenViewOp in linalg backend (llvm#2470)
  While trying to fix a bug in the `ConvertAtenViewOp` pattern in the linalg backend, I realized that the pattern had become quite complex and had accumulated some dead code, making it hard to reason about. This commit simplifies the pattern quite a bit. The main changes are:
  1. All the static helper functions in the `ConvertAtenViewOp` class have been simplified, both in their signature and their body. Each one now performs simple calculations on arrays and takes the least number of arguments necessary.
  2. The body of [the `while` loop](https://github.com/ramiro050/torch-mlir/blob/9fce566b0cb64ff2b198693d1f6ee9580b8fa01f/lib/Conversion/TorchToLinalg/DataMovement.cpp#L407) inside the main pattern has been changed to work on `MutableArrayRef` slices, to avoid having to keep track of `start` and `end` indices for the input and output shape arrays.
  3. All the heuristics used to determine the mapping between the input and output dimensions are now in [this relatively short `if-else` section](https://github.com/ramiro050/torch-mlir/blob/9fce566b0cb64ff2b198693d1f6ee9580b8fa01f/lib/Conversion/TorchToLinalg/DataMovement.cpp#L428-L460), making it easy to see what is going on.
  4. Dead code was eliminated and some of the documentation comments were updated.
  This commit does not add any new functionality to the `ConvertAtenViewOp` pattern.
- ff7f8b2
Commits on Sep 27, 2023
- 7760bda build: manually update PyTorch version
  Set PyTorch and TorchVision version to nightly release 2023-09-26. The aten._convolution.deprecated changes were made because upstream PyTorch now supports fp16 native convolution on CPU; refer to pytorch/pytorch@7c90521. Signed-Off By: Vivek Khandelwal <[email protected]>
- e69266a update PyTorch version to 2.2.0.dev20230927 (llvm#2489)
  torch version: 2.2.0.dev20230927; torch commit hash: d7520d8668dc08f7bed27a64f006c909006e653a; torchvision version: 0.17.0.dev20230927. Co-authored-by: Roll PyTorch Action <[email protected]>
- 7c6b9d2 [linalg] Fix handling of trailing size-1 dimensions in aten.view (llv…
- 8abfa5b Use PyTorch nightly for Arm release build (llvm#2488)
  The LTC backend has drifted from being able to pass tests on the stable PyTorch version, so pinning to nightly on ARM. Signed-Off By: Vivek Khandelwal <[email protected]>
Commits on Sep 28, 2023
- 4e1dd3b
Commits on Sep 29, 2023
- 860be09 Elide dynamic broadcast checks when in strict symbolic shapes mode. (llvm#2496)
  When importing dynamically shaped programs from Dynamo, via torch.compile or torch.export, we can assume that strict symbolic shape checks have been done prior to generating torch IR. Among other shape checking, this eliminates the case where an unknown dimension can be dynamically '1' in a way that signals a broadcast. Adds an `isAssumingStrictSymbolicShapes` utility which consults a `torch.assume_strict_symbolic_shapes` attribute on an enclosing scope and returns true if present. In the linalg pipeline, many runtime checks are elided when this returns true.
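To illustrate the case the commit above rules out (illustration only, not code from the commit): with numpy-style broadcasting, a dimension that happens to be 1 at runtime silently broadcasts, so without strict symbolic shapes the lowering has to insert a runtime check for it; under strict symbolic shapes, dimensions that are supposed to match are known to match.

```python
import torch

a = torch.randn(4, 5)
b_same = torch.randn(4, 5)  # shapes match exactly: no broadcast needed
b_one = torch.randn(1, 5)   # size-1 dim 0 triggers a numpy-style broadcast

print((a + b_same).shape)   # torch.Size([4, 5])
print((a + b_one).shape)    # torch.Size([4, 5]), broadcast along dim 0
```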
Commits on Oct 2, 2023
- 71ac62f build: manually update PyTorch version
  Set PyTorch and TorchVision version to nightly release 2023-09-28. The aten.baddbmm changes were made because upstream PyTorch now supports fp16 gemm on CPU; refer to pytorch/pytorch@9399e0b.
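As an illustration of the aten.baddbmm case mentioned above, a minimal sketch, assuming a PyTorch nightly recent enough to support fp16 gemm on CPU (per the commit message):

```python
import torch

inp = torch.randn(2, 3, 5, dtype=torch.float16)
b1 = torch.randn(2, 3, 4, dtype=torch.float16)
b2 = torch.randn(2, 4, 5, dtype=torch.float16)
# Batched matmul with add: out = inp + b1 @ b2, computed in fp16 on CPU.
out = torch.baddbmm(inp, b1, b2)
print(out.shape, out.dtype)  # torch.Size([2, 3, 5]) torch.float16
```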
- c434736 [MLIR][TORCH] Add support for conversion to int8 dtype
  Signed-Off By: Vivek Khandelwal <[email protected]>
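A minimal PyTorch-level example of the conversion this commit adds lowering support for (illustrative only):

```python
import torch

x = torch.tensor([[1.7, -2.3], [100.0, -100.0]])
y = x.to(torch.int8)  # dtype conversion to int8; values are truncated toward zero
print(y, y.dtype)
```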
- 9293326 [MLIR][TORCH] Add support for bitwise_right_shift and bitwise_and.Scalar op
  Signed-Off By: Vivek Khandelwal <[email protected]>
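Illustrative PyTorch usage of the two ops named above (not from the commit); passing a Python int as the second operand is what exercises the `.Scalar` overload of bitwise_and.

```python
import torch

x = torch.tensor([8, 16, 32], dtype=torch.int32)
print(torch.bitwise_right_shift(x, 2))  # tensor([2, 4, 8], dtype=torch.int32)
print(torch.bitwise_and(x, 24))         # scalar operand: bitwise_and.Scalar
```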
- b75c208 update PyTorch version to 2.2.0.dev20231002 (llvm#2497)
  torch version: 2.2.0.dev20231002; torch commit hash: 4dae8b49630d2784f6a5d8726db30923e2d1e077; torchvision version: 0.17.0.dev20231002. Co-authored-by: Roll PyTorch Action <[email protected]>
- d10a86f Also, revert llvm#2488. Disabling LTC based on the discussion here: https://discord.com/channels/636084430946959380/742573221882364009/1156272667813494824
Commits on Oct 3, 2023
- 32d9b20 Add linspace/cumprod/roll ops (llvm#2498)
  Add linspace/cumprod/roll ops to ODS and add shape inference functions to make it work with LTC. Also, add some tensor utils to LTC library for searching for non-detach copy nodes.
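For reference, the three ops added here as they appear at the PyTorch level (illustrative sketch, unrelated to the LTC plumbing itself):

```python
import torch

print(torch.linspace(0.0, 1.0, steps=5))                    # [0.00, 0.25, 0.50, 0.75, 1.00]
print(torch.cumprod(torch.tensor([1.0, 2.0, 3.0]), dim=0))  # [1., 2., 6.]
print(torch.roll(torch.arange(4), shifts=1, dims=0))        # [3, 0, 1, 2]
```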
- ca6ce89 [MLIR][TORCH] Add support for int8 dtype for sub, add, and bitwise_and op
  Signed-Off By: Vivek Khandelwal <[email protected]>
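A small illustration of the elementwise int8 cases this commit covers (sketch only):

```python
import torch

a = torch.tensor([10, 20, 30], dtype=torch.int8)
b = torch.tensor([1, 2, 3], dtype=torch.int8)
print(a + b)                    # add with int8 operands
print(a - b)                    # sub with int8 operands
print(torch.bitwise_and(a, b))  # bitwise_and with int8 operands
```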
- 4892ed4 update PyTorch version to 2.2.0.dev20231003 (llvm#2500)
  torch version: 2.2.0.dev20231003; torch commit hash: 4e30fa82315208dcd38fa16a0ed9851fa8e98bc9; torchvision version: 0.17.0.dev20231003. Co-authored-by: Roll PyTorch Action <[email protected]>
- 1c508af
- 2e5d650 [linalg] Add handling for leading and trailing size-1 dims in ViewOp
  This commit adds to the lowering of `aten.view` handling for the following cases:
  - `(..., a.size(i))` -> `(..., a.size(i), 1, ..., 1)`
  - `(..., a.size(i), 1, ..., 1)` -> `(..., a.size(i))`
  - `(a.size(i), ...)` -> `(1, ..., 1, a.size(i), ...)`
  - `(1, ..., 1, a.size(i), ...)` -> `(a.size(i), ...)`
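The four cases listed above correspond to view calls like the following (illustrative PyTorch-level sketch, not taken from the commit):

```python
import torch

a = torch.randn(4, 3)
t = a.view(4, 3, 1, 1)   # append trailing size-1 dims
back = t.view(4, 3)      # drop trailing size-1 dims
l = a.view(1, 1, 4, 3)   # prepend leading size-1 dims
front = l.view(4, 3)     # drop leading size-1 dims
print(t.shape, back.shape, l.shape, front.shape)
```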
Commits on Oct 4, 2023
- 14e6da8 update PyTorch version to 2.2.0.dev20231004 (llvm#2502)
  torch version: 2.2.0.dev20231004; torch commit hash: 56af607c0437ed7321da4b96a4dbccdbd8b5a98b; torchvision version: 0.17.0.dev20231004. Co-authored-by: Roll PyTorch Action <[email protected]>
Commits on Oct 5, 2023
- ae72eec Improve aten.broadcast_to folder when in strict symbol mode (llvm#2504)
  Strict symbolic shapes allow us to assume numpy-style dynamic broadcasts never occur. This allows us to strengthen the folder for broadcasts to cases where the rank is the same and all shapes match (including dynamic sentinel values).
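A minimal example of the folding opportunity described above (a sketch, not code from the commit): when the target shape has the same rank and every size already matches, the broadcast is an identity and can be folded away.

```python
import torch

x = torch.randn(4, 5)
y = torch.broadcast_to(x, (4, 5))  # same rank, all sizes match: a no-op broadcast
print(y.shape, torch.equal(x, y))  # torch.Size([4, 5]) True
```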
- 42b6c0a update PyTorch version to 2.2.0.dev20231005 (llvm#2506)
  torch version: 2.2.0.dev20231005; torch commit hash: 439cba92777ff61b49d24096edfaf128fbd742ea; torchvision version: 0.17.0.dev20231005. Co-authored-by: Roll PyTorch Action <[email protected]>
- 6f81ad7 [TorchToLinalg] Improve broadcast lowerings in strict symbolic modes (llvm#2505)
  With strict symbolic shapes, we can assume numpy-style dynamic broadcasts never occur. This improves the lowering in the presence of this assumption.
Commits on Oct 6, 2023
- 26ea13d update PyTorch version to 2.2.0.dev20231006 (llvm#2507)
  torch version: 2.2.0.dev20231006; torch commit hash: 20217d1426d99d0caa70e1473d89e0c834b7f35e; torchvision version: 0.17.0.dev20231006. Co-authored-by: Roll PyTorch Action <[email protected]>
Commits on Oct 10, 2023
- 9b5a4af
Commits on Oct 14, 2023
- e649e06 Add aten.unflatten.int support and its torch-to-tosa lowering (llvm#2509)
  - Add aten.unflatten.int op
  - Add its torch-to-tosa lowering
  - Update the TorchToTosa/basic.mlir tests
  To test e2e tosa lowering: `python -m e2e_testing.main -v -c=tosa`
  Co-authored-by: Ze Zhang <[email protected]>
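Illustrative PyTorch usage of aten.unflatten.int (not from the commit):

```python
import torch

x = torch.randn(2, 12)
y = x.unflatten(1, (3, 4))  # split dim 1 (size 12) into (3, 4)
print(y.shape)              # torch.Size([2, 3, 4])
```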
Commits on Oct 16, 2023
- f2c53b8 Add aten.isclose support and its torch-to-tosa lowering (llvm#2512)
  - Add aten.isclose op
  - Add its torch-to-tosa lowering
  - Update the TorchToTosa/basic.mlir tests
  To test e2e tosa lowering: `python -m e2e_testing.main -v -c=tosa`
  Co-authored-by: Ze Zhang <[email protected]>
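Illustrative PyTorch usage of aten.isclose (not from the commit):

```python
import torch

a = torch.tensor([1.0, 2.0, 3.0])
b = torch.tensor([1.0, 2.0001, 4.0])
# Elementwise check: |a - b| <= atol + rtol * |b|
print(torch.isclose(a, b, rtol=1e-3, atol=1e-5))  # tensor([ True,  True, False])
```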
Commits on Oct 17, 2023
- 14a4da9 Update llvm-project to b44b349 (llvm#2511)
  The main purpose is to bring in the new mesh dialect change: llvm/llvm-project#68007
  Authored by Chi_Liu on Oct 17, 2023.
- 4279b75 update AtenClampOp in torch-to-tosa to handle fp inputs (llvm#2516)
  As titled. Co-authored-by: Ze Zhang <[email protected]>
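A small floating-point example of the clamp case this commit handles (sketch only):

```python
import torch

x = torch.tensor([-1.5, 0.25, 2.0])
print(torch.clamp(x, min=0.0, max=1.0))  # tensor([0.0000, 0.2500, 1.0000])
```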
Commits on Oct 18, 2023
- 52abae1 Bump LLVM to get bazel fixes (llvm#2517)
  The last llvm bump in llvm#2511 pointed to llvm/llvm-project@b44b349, however the bazel build upstream was not clean at this point:
  ```
  ERROR: /root/.cache/bazel/_bazel_root/b89349c08f7224396763d14fe35cba11/external/llvm-project/mlir/BUILD.bazel:5837:18: TdGenerate external/llvm-project/mlir/include/mlir/Dialect/LLVMIR/NVVMOpsInterface.h.inc failed: (Exit 1): mlir-tblgen failed: error executing command ...
  external/llvm-project/mlir/include/mlir/Dialect/LLVMIR/NVVMOps.td:20:9: error: Could not find include file 'mlir/Dialect/LLVMIR/BasicPtxBuilderInterface.td'
  include "mlir/Dialect/LLVMIR/BasicPtxBuilderInterface.td"
  ^
  external/llvm-project/mlir/include/mlir/Dialect/LLVMIR/NVVMOps.td:20:9: error: Unexpected token at top level
  include "mlir/Dialect/LLVMIR/BasicPtxBuilderInterface.td"
  ^
  ```
  The bazel fixes followed in a subsequent commit at llvm/llvm-project@28b27c1. This PR bumps LLVM by a few more commits (to include the bazel fixes), which helps restore Torch-MLIR's bazel build back to 🟢. GHA workflow to test the bazel build: https://github.com/sjain-stanford/torch-mlir/actions/runs/6555101471/job/17803082508
- 86cf909
- b846437
- 9624268