Speed up ONNX with TensorRT

InsightFace uses ONNX models for inference. They are already fast with the CUDA execution provider, but TensorRT can make them even faster.

We can query the available providers from onnxruntime and use TensorRT if it is available.

import logging

import onnxruntime as ort
import torch

logger = logging.getLogger(__name__)

def get_available_providers() -> list[str]:
    available_providers = ort.get_available_providers()
    logger.info(f"Available ONNX Runtime providers: {available_providers}")

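    # Prefer TensorRT over CUDA over CPU, depending on what is actually available.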
    match (
        torch.cuda.is_available(),
        "TensorrtExecutionProvider" in available_providers,
    ):
        case (True, True):
            logger.info("Using TensorRT provider for optimal performance")
            return ["TensorrtExecutionProvider"]
        case (True, False):
            logger.info("Using CUDA provider (TensorRT not available)")
            return ["CUDAExecutionProvider"]
        case _:
            logger.info("CUDA not available, using CPU provider")
            return ["CPUExecutionProvider"]

When running the program, we may encounter the following error:

[E:onnxruntime:Default, provider_bridge_ort.cc:2022 TryGetProviderInfo_TensorRT] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1695 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_tensorrt.so with error: libnvinfer.so.10: cannot open shared object file: No such file or directory

After checking the .venv directory, we find that libnvinfer.so.10 lives under lib/python3.13/site-packages/tensorrt_libs/, while libonnxruntime_providers_tensorrt.so lives under lib/python3.13/site-packages/onnxruntime/. The dynamic linker cannot find libnvinfer.so.10 from there, so we need to add the tensorrt_libs directory to LD_LIBRARY_PATH.

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:path/to/tensorrt_libs
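
To find the exact directory to append, here is a small sketch that prints it, assuming the tensorrt_libs wheel is installed in the active environment:

import pathlib
import sysconfig

# Locate site-packages of the current interpreter; tensorrt_libs lives inside it.
site_packages = pathlib.Path(sysconfig.get_paths()["purelib"])
print(site_packages / "tensorrt_libs")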

Now it works. We can see Applied providers: ['TensorrtExecutionProvider', 'CPUExecutionProvider'] in the log.
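
To double-check which providers a session actually ended up with, we can ask onnxruntime directly. A quick sketch using the helper above; model.onnx is a placeholder path:

import onnxruntime as ort

# "model.onnx" is a placeholder; point this at any ONNX model file.
sess = ort.InferenceSession("model.onnx", providers=get_available_providers())
# Should include 'TensorrtExecutionProvider' once LD_LIBRARY_PATH is set correctly.
print(sess.get_providers())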