
Torch-TensorRT v2.2.0


Dynamo Frontend for Torch-TensorRT, PyTorch 2.2, CUDA 12.1, TensorRT 8.6

Torch-TensorRT 2.2.0 targets PyTorch 2.2, CUDA 12.1 (builds for CUDA 11.8 are available via the PyTorch package index: https://download.pytorch.org/whl/cu118), and TensorRT 8.6. This is the second major release of Torch-TensorRT: the default frontend has changed from TorchScript to Dynamo, allowing users to more easily control and customize the compiler in Python.

The Dynamo frontend supports both JIT workflows through torch.compile and AOT workflows through torch.export + torch_tensorrt.compile. It targets the Core ATen Opset (https://pytorch.org/docs/stable/torch.compiler_ir.html#core-aten-ir) and currently has 82% coverage. Just as with TorchScript, graphs will be partitioned based on the ability to map operators to TensorRT, in addition to any graph surgery done in Dynamo. Both workflows are sketched below.
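As a rough sketch of the two workflows (the toy model, input shapes, and backend string are illustrative assumptions; a CUDA device with TensorRT installed is assumed):

import torch
import torch_tensorrt

# Toy model for illustration only
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()
).eval().cuda()
inputs = [torch.randn(1, 3, 32, 32, device="cuda")]

# JIT workflow: TensorRT engines are built lazily on the first call
jit_model = torch.compile(model, backend="torch_tensorrt")
jit_model(*inputs)

# AOT workflow: the module is traced via torch.export, then compiled ahead of time
trt_model = torch_tensorrt.compile(model, ir="dynamo", inputs=inputs)
trt_model(*inputs)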

Output Format

Through the Dynamo frontend, different output formats can be selected for AOT workflows via the output_format kwarg:

  • torchscript: the resulting compiled module is traced with torch.jit.trace, suitable for Pythonless deployments.
  • exported_program: a new serializable format for PyTorch models.
  • graph_module: returns a torch.fx.GraphModule, for running further graph transformations on the resulting model.
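For example, a minimal sketch of selecting each format (reusing the model and inputs from the earlier sketch; the serialization calls shown are illustrative):

# Assumes `model` and `inputs` as defined above
trt_ts = torch_tensorrt.compile(
    model, ir="dynamo", inputs=inputs, output_format="torchscript"
)
torch.jit.save(trt_ts, "trt_model.ts")  # traced module for Pythonless deployment

trt_ep = torch_tensorrt.compile(
    model, ir="dynamo", inputs=inputs, output_format="exported_program"
)
torch.export.save(trt_ep, "trt_model.ep")  # serializable ExportedProgram

trt_gm = torch_tensorrt.compile(
    model, ir="dynamo", inputs=inputs, output_format="graph_module"
)
print(trt_gm.graph)  # torch.fx.GraphModule, open to further transformations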

Multi-GPU Safety

To address a long-standing source of overhead, single-GPU systems will now operate without the typically required device checks. These checks can be re-added when multiple GPUs are available to the host process using torch_tensorrt.runtime.set_multi_device_safe_mode:

import torch_tensorrt

# Enables Multi Device Safe Mode
torch_tensorrt.runtime.set_multi_device_safe_mode(True)

# Disables Multi Device Safe Mode [Default Behavior]
torch_tensorrt.runtime.set_multi_device_safe_mode(False)

# Enables Multi Device Safe Mode, then resets the safe mode to its prior setting
with torch_tensorrt.runtime.set_multi_device_safe_mode(True):
    ...

More information can be found here: https://pytorch.org/TensorRT/user_guide/runtime.html

Capability Validators

In the Dynamo frontend, tests can be written and associated with converters to dynamically enable or disable them based on conditions in the target graph.

For example, the convolution converter in Dynamo only supports 1D, 2D, and 3D convolution. We can therefore attach a lambda which, given a convolution FX node, determines whether the convolution is supported:

@dynamo_tensorrt_converter(
    torch.ops.aten.convolution.default,
    capability_validator=lambda conv_node: conv_node.args[7] in ([0], [0, 0], [0, 0, 0]),
)  # type: ignore[misc]
def aten_ops_convolution(
    ctx: ConversionContext,
    target: Target,
    args: Tuple[Argument, ...],
    kwargs: Dict[str, Argument],
    name: str,
) -> Union[TRTTensor, Sequence[TRTTensor]]:
    ...  # conversion logic elided

In cases where a node is not supported, it will be partitioned out and run in PyTorch.
All capability validators are run after the lowering phase and prior to partitioning.

More information on writing converters for the Dynamo frontend can be found here: https://pytorch.org/TensorRT/contributors/dynamo_converters.html

Breaking Changes

  • Dynamo (torch.export) is now the default frontend for Torch-TensorRT, and the TorchScript and FX frontends are now in maintenance mode. Therefore, any torch.nn.Modules or torch.fx.GraphModules provided to torch_tensorrt.compile will by default be exported using torch.export and then compiled. This default can be overridden by setting the ir=[torchscript|fx] kwarg, as sketched below. Reported bugs will first be addressed in the Dynamo stack before other frontends are attempted; however, community pull requests for additional functionality in the TorchScript and FX frontends will still be accepted.
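A minimal sketch of overriding the default frontend (the model and inputs are placeholders):

import torch
import torch_tensorrt

model = torch.nn.Linear(16, 4).eval().cuda()
inputs = [torch.randn(1, 16, device="cuda")]

# Default path: ir="dynamo" (torch.export-based)
trt_dynamo = torch_tensorrt.compile(model, inputs=inputs)

# Opt back into the TorchScript frontend
trt_ts = torch_tensorrt.compile(model, ir="torchscript", inputs=inputs)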

What's Changed

New Contributors

Full Changelog: v1.4.0...v2.2.0