Update CUDA python requirements with onnxruntime 1.19 update #170

Merged · 5 commits · Aug 23, 2024
5 changes: 5 additions & 0 deletions .github/workflows/build-executable.yml
@@ -83,6 +83,11 @@ jobs:
run: cp ./server/{force_gpu_clocks.bat,reset_gpu_clocks.bat} ./server/dist/
shell: bash
if: matrix.os == 'windows-latest' && matrix.backend == 'cuda'
- name: Add CUDA library symlinks
run: ln -svf nvidia/*/lib/*.so* .
shell: bash
if: matrix.os == 'ubuntu-20.04' && matrix.backend == 'cuda'
working-directory: ./server/dist/MMVCServerSIO/_internal
- name: Pack artifact
shell: bash
run: |
5 changes: 5 additions & 0 deletions .github/workflows/make-release.yml
@@ -122,6 +122,11 @@ jobs:
run: cp ./server/{force_gpu_clocks.bat,reset_gpu_clocks.bat} ./server/dist/
shell: bash
if: matrix.os == 'windows-latest' && matrix.backend == 'cuda'
- name: Add CUDA library symlinks
run: ln -svf nvidia/*/lib/*.so* .
shell: bash
if: matrix.os == 'ubuntu-20.04' && matrix.backend == 'cuda'
working-directory: ./server/dist/MMVCServerSIO/_internal
- name: Pack artifact
shell: bash
run: |
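Both workflow steps above run the same symlink command inside the PyInstaller `_internal` directory. As a hedged illustration (the directory and library names below are invented for the demo), this is the effect of `ln -svf nvidia/*/lib/*.so* .`: the pip-installed `nvidia-*` wheels ship shared libraries under `nvidia/<package>/lib/`, and linking them into the top-level directory lets the dynamic loader find them next to the bundled binaries.

```shell
# Demo of the symlink step, using a throwaway directory layout that
# mimics dist/MMVCServerSIO/_internal (names are hypothetical).
mkdir -p demo/_internal/nvidia/cudnn/lib
touch demo/_internal/nvidia/cudnn/lib/libcudnn.so.9

cd demo/_internal
# Link every shared library from the nvidia-* wheel trees into the
# current directory, exactly as the workflow step does.
ln -svf nvidia/*/lib/*.so* .
ls -l libcudnn.so.9
```

The `-f` flag makes the step idempotent across re-runs, and `-s` keeps the links relative to `_internal`, so the packed artifact stays self-contained.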
1 change: 1 addition & 0 deletions server/app.py
@@ -9,6 +9,7 @@
# Reset CUDA_PATH since all libraries are already bundled.
# Existing CUDA installations may be incompatible with PyTorch or ONNX runtime
os.environ['CUDA_PATH'] = ''
os.environ['CUDNN_PATH'] = ''
# Fix high CPU usage caused by faiss-cpu for AMD CPUs.
# https://github.com/facebookresearch/faiss/issues/53#issuecomment-288351188
os.environ['OMP_WAIT_POLICY'] = 'PASSIVE'
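Taken together with the existing `CUDA_PATH` reset, the new line gives a startup guard along these lines (a minimal sketch of the logic in `server/app.py`; the rest of the module is omitted):

```python
import os

# Reset CUDA_PATH and CUDNN_PATH before importing torch/onnxruntime, so a
# system-wide CUDA or cuDNN installation cannot shadow the bundled libraries.
os.environ['CUDA_PATH'] = ''
os.environ['CUDNN_PATH'] = ''

# Fix high CPU usage caused by faiss-cpu on AMD CPUs.
# https://github.com/facebookresearch/faiss/issues/53#issuecomment-288351188
os.environ['OMP_WAIT_POLICY'] = 'PASSIVE'
```

Setting the variables to empty strings (rather than deleting them) ensures any later lookup sees a defined-but-blank path instead of falling back to a system default.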
7 changes: 3 additions & 4 deletions server/requirements-cuda.txt
@@ -3,10 +3,9 @@
# # wget https://repo.anaconda.com/archive/Anaconda3-2022.10-Linux-x86_64.sh
# # bash Anaconda3-2022.10-Linux-x86_64.sh

# PyPI onnxruntime-gpu is compiled with CUDA 11.x
--extra-index-url https://download.pytorch.org/whl/cu118
# torch 2.4.0 has problems with Linux builds
torch==2.3.1
# PyPI onnxruntime-gpu>=1.19 is compiled with CUDA 12.x and cuDNN 9.x
--extra-index-url https://download.pytorch.org/whl/cu121
torch>=2.4.0
torchaudio
faiss-cpu==1.8.0; sys_platform!='linux'
faiss-gpu; sys_platform=='linux'
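After installing from the updated `requirements-cuda.txt`, a quick sanity check of the installed stack might look like this (`cuda_stack_report` is a hypothetical helper written for this sketch, not part of the project; it only reports versions if the packages are actually present):

```python
from importlib.metadata import version, PackageNotFoundError

def cuda_stack_report():
    """Report installed versions of the CUDA-relevant packages, or None
    for any package that is not installed."""
    report = {}
    for pkg in ("torch", "onnxruntime-gpu"):
        try:
            report[pkg] = version(pkg)
        except PackageNotFoundError:
            report[pkg] = None
    return report

print(cuda_stack_report())
```

With the pins above one would expect torch >= 2.4.0 from the cu121 index and onnxruntime-gpu >= 1.19, since both must agree on CUDA 12.x and cuDNN 9.x at runtime.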
2 changes: 1 addition & 1 deletion server/requirements-dml.txt
@@ -3,7 +3,7 @@
# # wget https://repo.anaconda.com/archive/Anaconda3-2022.10-Linux-x86_64.sh
# # bash Anaconda3-2022.10-Linux-x86_64.sh

torch==2.3.1 # torch-directml-0.2.2.dev240614 supports up to to 2.3.1
torch==2.3.1 # torch-directml-0.2.4.dev240815 supports torch up to 2.3.1
torchaudio
torch-directml
faiss-cpu==1.8.0
@@ -1,9 +1,9 @@
import numpy as np
import torch
import onnxruntime
from const import PitchExtractorType, F0_MIN, F0_MAX
from voice_changer.common.deviceManager.DeviceManager import DeviceManager
from voice_changer.RVC.pitchExtractor.PitchExtractor import PitchExtractor
import onnxruntime
from voice_changer.RVC.pitchExtractor import onnxcrepe


@@ -1,6 +1,6 @@
import numpy as np
import onnxruntime
import torch
import onnxruntime
from const import PitchExtractorType
from voice_changer.RVC.pitchExtractor.PitchExtractor import PitchExtractor
from voice_changer.common.deviceManager.DeviceManager import DeviceManager
@@ -1,11 +1,11 @@
import numpy as np
import torch
import onnxruntime
from const import PitchExtractorType
from voice_changer.common.OnnxLoader import load_onnx_model
from voice_changer.RVC.pitchExtractor.PitchExtractor import PitchExtractor
from voice_changer.common.deviceManager.DeviceManager import DeviceManager
from voice_changer.common.MelExtractor import MelSpectrogram
import onnxruntime
import torch

class RMVPEOnnxPitchExtractor(PitchExtractor):
