
How to Resolve DWPose/Onnxruntime Warning (or How to Run Multiple PyTorch Versions Side-by-Side)


(Update 20231210: Thanks to user axicec for resolving the issues for ComfyUI portable installations, i.e., if you've installed the portable version of ComfyUI, you'll need to use the embedded python for all of the python/pip-related commands. The embedded python is available at python_embeded\python.exe in the portable ComfyUI directory.)

(Update 20231126: As noted by Reddit user u/benzebut0, and confirmed in my testing, we can change PyTorch versions without installing the full CUDA toolkit, as long as you have the correct NVIDIA GPU driver. The below guide has been updated accordingly.)

Introduction

If you're here, then you have probably seen this warning in your terminal window from ComfyUI with comfyui_controlnet_aux installed and, chances are, you didn't find much information on how to resolve it.

C:\path\to\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\dwpose.py:26: UserWarning: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly

So what does this error mean, why am I getting it, and more importantly, how do I resolve it?

Well, typically this means a certain Python library (onnxruntime-gpu in our case) requires a different version of PyTorch/CUDA (v11.8) than what you have installed on your system (v12.x most likely). Hopefully, this article will help guide you through setting up and running multiple versions of PyTorch/CUDA on your machine side-by-side using virtual environments, specifically to resolve this issue for ComfyUI's ControlNet Auxiliary Preprocessors node.

Understanding the Issue

The error message mentioned above usually means that DWPose (a deep-learning model, and more specifically a ControlNet preprocessor for OpenPose within ComfyUI's ControlNet Auxiliary Preprocessors) doesn't support the CUDA version installed on your machine.

For example, if you run the following command in a PowerShell window:

python -c "import torch; print(torch.__version__); print(torch.version.cuda)"

you might see something like:

2.1.1+cu121
12.1

(Note: my PyTorch/CUDA Version is 12.1)

However, ONNX Runtime's documentation shows that the latest supported CUDA version is 11.8 (at the time of this writing). So if, like me, you're on a 12.x or newer version, we basically have two options: downgrade the system to PyTorch/CUDA 11.8 to support this specific library, or wait until ONNX Runtime releases an updated version compatible with CUDA 12.x.
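To make the compatibility check above concrete, here's a small sketch (not one of the guide's required commands) that parses the `+cuXYZ` suffix out of torch's version string; the helper `cuda_build` is a hypothetical name, not a torch API:

```python
# Sketch: extract the CUDA version baked into a PyTorch build string,
# e.g. "2.1.1+cu121" -> (12, 1). Returns None for CPU-only builds.
def cuda_build(version_string):
    _, _, suffix = version_string.partition("+cu")
    if not suffix:
        return None  # no "+cu" suffix means a CPU-only build
    # Last digit is the minor version, the rest is the major version
    return (int(suffix[:-1]), int(suffix[-1]))

print(cuda_build("2.1.1+cu121"))  # (12, 1) -- newer than onnxruntime-gpu supports
print(cuda_build("2.1.1+cu118"))  # (11, 8) -- the version we want
print(cuda_build("2.1.1"))        # None   -- CPU-only build
```

In practice you'd feed it `torch.__version__`; if it returns something greater than `(11, 8)`, you're in the situation this article describes.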

Fortunately, by utilizing Python virtual environments (venv), we can keep our existing 12.x PyTorch/CUDA and install 11.8 specifically for our ComfyUI environment by running multiple PyTorch versions concurrently.

Setting Up a Python Virtual Environment

Before diving in head first, we need to make sure we're working within a Python virtual environment. (Feel free to skip ahead to Installing PyTorch for CUDA 11.8 if you already have one set up for ComfyUI.) A virtual environment lets us manage dependencies for specific projects without affecting global Python settings. Here's how to set it up on Windows:

  1. Install Python (you can download it from the official website)

  2. Open PowerShell and navigate to your project directory

    cd path\to\ComfyUI
  3. Create a Virtual Environment:

    python -m venv myenv

    You can replace myenv with any name you want for your virtual environment.

  4. Activate the Environment: Within the same directory, run the following command:

    .\myenv\Scripts\activate

    Your command prompt should now indicate that you're in the virtual environment.
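For reference, the equivalent of steps 2-4 on Linux/macOS differs only in the paths and the activate script (a sketch, assuming `python3` is on your PATH; the directory names are illustrative):

```shell
cd path/to/ComfyUI          # your ComfyUI directory
python3 -m venv myenv       # create the virtual environment
source myenv/bin/activate   # activate it (note: bin/, not Scripts\)
which python                # should now point inside myenv/bin
```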

Installing PyTorch for CUDA 11.8

We can now install the PyTorch build for the specific CUDA version we need to support our ComfyUI requirements. Run the following commands in the same PowerShell window/directory:

  1. Uninstall Current PyTorch Version: (skip this step if you're setting up the virtual environment for the first time)

    pip uninstall torch torchvision torchaudio
  2. Install PyTorch for CUDA 11.8:

    pip install torch==2.1.1+cu118 torchvision==0.16.1+cu118 torchaudio==2.1.1+cu118 -f https://download.pytorch.org/whl/torch_stable.html
  3. Verify PyTorch Installation:

    python -c "import torch; print(torch.__version__); print(torch.version.cuda)"

    If everything worked correctly, you should see the following print in the terminal window:

    2.1.1+cu118
    11.8

Note: if you've installed the portable version of ComfyUI, you'll need to use the embedded python/pip to uninstall/install PyTorch. The embedded python is available at python_embeded\python.exe and steps 1-3 can be accomplished with the following commands:

python_embeded\python.exe -m pip uninstall torch torchvision torchaudio
python_embeded\python.exe -m pip install torch==2.1.1+cu118 torchvision==0.16.1+cu118 torchaudio==2.1.1+cu118 -f https://download.pytorch.org/whl/torch_stable.html
python_embeded\python.exe -c "import torch; print(torch.__version__); print(torch.version.cuda)"

Installing ONNX Runtime

Finally, now that we have the right environment & dependencies, we can install onnxruntime-gpu:

pip install onnxruntime-gpu

(or if you've installed the portable version of ComfyUI)

python_embeded\python.exe -m pip install onnxruntime-gpu

...and to verify everything is working correctly, run ComfyUI and observe the following terminal output:

DWPose: Onnxruntime with acceleration providers detected
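If you'd rather check outside of ComfyUI, you can ask ONNX Runtime directly which execution providers it sees; `get_available_providers()` is a real onnxruntime function, and the snippet is guarded so it degrades gracefully when the package isn't installed (the exact provider list depends on your machine):

```python
# Sketch: list onnxruntime's available execution providers, if installed.
try:
    import onnxruntime
    providers = onnxruntime.get_available_providers()
except ImportError:
    providers = None  # onnxruntime isn't installed in this environment

if providers is None:
    print("onnxruntime is not installed")
elif "CUDAExecutionProvider" in providers:
    print("GPU acceleration available:", providers)
else:
    print("CPU only:", providers)
```

Seeing `CUDAExecutionProvider` in the list corresponds to the "acceleration providers detected" message above.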

(Congratulations if you followed along and made it this far 🎉)

Conclusion

By following the above steps, you can successfully run PyTorch/CUDA 11.8 side-by-side with PyTorch/CUDA 12.x on your local machine, ensuring compatibility with the DWPose ControlNet preprocessor and speeding up those renders 😎.

If/when ONNX Runtime supports CUDA 12.x, we can simply uninstall the 11.8 build of PyTorch, install the 12.x build in our virtual environment, and we should be good to go. Hopefully this also sheds some light on how working within a virtual environment can help you maintain project-specific dependencies without affecting your global Python environment & setup.
