RuntimeError: failed to load voice "ja" #323

Open · zachysaur opened this issue Nov 1, 2024 · 9 comments
@zachysaur

(venv) F:\maskgct\maskgct>python app.py
./models/tts/maskgct/g2p\sources\g2p_chinese_model\poly_bert_model.onnx
Error: Could not load the specified mbrola voice file.
Error: Could not load the specified mbrola voice file.
Traceback (most recent call last):
  File "F:\maskgct\maskgct\app.py", line 20, in <module>
    from models.tts.maskgct.g2p.g2p_generation import g2p, chn_eng_g2p
  File "F:\maskgct\maskgct\models\tts\maskgct\g2p\g2p_generation.py", line 10, in <module>
    from models.tts.maskgct.g2p.utils.g2p import phonemizer_g2p
  File "F:\maskgct\maskgct\models\tts\maskgct\g2p\utils\g2p.py", line 30, in <module>
    phonemizer_ja = EspeakBackend(
  File "F:\maskgct\maskgct\venv\lib\site-packages\phonemizer\backend\espeak\espeak.py", line 49, in __init__
    self._espeak.set_voice(language)
  File "F:\maskgct\maskgct\venv\lib\site-packages\phonemizer\backend\espeak\wrapper.py", line 249, in set_voice
    raise RuntimeError(  # pragma: nocover
RuntimeError: failed to load voice "ja"

(venv) F:\maskgct\maskgct>
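For context, the trace shows phonemizer's EspeakBackend failing while setting the "ja" voice, which usually means the eSpeak library it picked up does not ship Japanese voice data. A minimal diagnostic sketch (my own, not from the repo; the DLL path assumes a default espeak-ng install, and PHONEMIZER_ESPEAK_LIBRARY is phonemizer's documented override for the library location):

```python
# Diagnostic sketch, not part of MaskGCT: check which espeak library phonemizer
# loads and whether it exposes the Japanese voice. The DLL path below is the
# default location used by the espeak-ng Windows installer -- adjust as needed.
import os

# Point phonemizer at espeak-ng explicitly; an older eSpeak install found first
# on PATH may lack the "ja" voice and trigger the RuntimeError above.
os.environ["PHONEMIZER_ESPEAK_LIBRARY"] = r"C:\Program Files\eSpeak NG\libespeak-ng.dll"

from phonemizer.backend import EspeakBackend

voices = EspeakBackend.supported_languages()  # {language code: voice name}
print("ja" in voices)  # should print True before app.py can start
```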

@yuantuo666 (Collaborator)

Hi, MaskGCT is built in a Linux environment, so for a smoother experience we recommend using Linux to reproduce it.

If you are having trouble configuring the environment on a Windows machine, you can try following this blog post: https://www.cnblogs.com/v3ucn/p/18511187

@zelenooki87 commented Nov 2, 2024

@zachysaur I had the same issue on Windows. I solved it by replacing the phonemizer files with the ones from this fixed commit:
https://github.com/bootphon/phonemizer/tree/b2db56adceef42b9a20c8ffb4d49868f630b88a1/phonemizer

After that, if you get a character Unicode error, just turn on the UTF-8 (beta) option for non-Unicode programs in the regional and language settings.
(screenshot: SNAG-0000)

If you get an mbrola DLL error, copy the two files from the attached mbrola.zip into:

C:\Program Files (x86)\eSpeak\command_line

It should now work.
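If it helps, here is a quick sanity check (a sketch of my own; the keyword arguments and the sample string are illustrative, not necessarily what the repo uses) that builds the same Japanese backend that models/tts/maskgct/g2p/utils/g2p.py constructs at line 30:

```python
# After patching phonemizer and fixing the eSpeak/mbrola files, this should
# print Japanese phonemes instead of raising: RuntimeError: failed to load voice "ja"
from phonemizer.backend import EspeakBackend

backend = EspeakBackend(language="ja", preserve_punctuation=True, with_stress=True)
print(backend.phonemize(["こんにちは"]))
```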

@zachysaur (Author)


Still the same error, even after following everything in that blog post:
./models/tts/maskgct/g2p\sources\g2p_chinese_model\poly_bert_model.onnx
2024-11-03 08:38:00.9068680 [E:onnxruntime:Default, provider_bridge_ort.cc:1862 onnxruntime::TryGetProviderInfo_CUDA] D:\a_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1539 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "F:\gct\Amphion\venv\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"

2024-11-03 08:38:00.9208389 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:993 onnxruntime::python::CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Require cuDNN 9.* and CUDA 12.*, and the latest MSVC runtime. Please install all dependencies as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
Error: Could not load the specified mbrola voice file.
Error: Could not load the specified mbrola voice file.
Traceback (most recent call last):
  File "F:\gct\Amphion\1.py", line 1, in <module>
    from models.tts.maskgct.maskgct_utils import *
  File "F:\gct\Amphion\models\tts\maskgct\maskgct_utils.py", line 20, in <module>
    from models.tts.maskgct.g2p.g2p_generation import g2p, chn_eng_g2p
  File "F:\gct\Amphion\models\tts\maskgct\g2p\g2p_generation.py", line 10, in <module>
    from models.tts.maskgct.g2p.utils.g2p import phonemizer_g2p
  File "F:\gct\Amphion\models\tts\maskgct\g2p\utils\g2p.py", line 30, in <module>
    phonemizer_ja = EspeakBackend(
  File "F:\gct\Amphion\venv\Lib\site-packages\phonemizer\backend\espeak\espeak.py", line 49, in __init__
    self._espeak.set_voice(language)
  File "F:\gct\Amphion\venv\Lib\site-packages\phonemizer\backend\espeak\wrapper.py", line 249, in set_voice
    raise RuntimeError(  # pragma: nocover
RuntimeError: failed to load voice "ja"

@zelenooki87 commented Nov 3, 2024


You could try this repo; it worked correctly for me on Windows:
https://github.com/justinjohn0306/MaskGCT-Windows

The error message says you don't have CUDA 12.x, cuDNN (plus zlib.dll), and the MSVC build tools on PATH. If that's the case, you can just install the default onnxruntime. Or, if you install all the CUDA dependencies properly and add them to your PATH variable, you can install onnxruntime-gpu for faster inference, then uninstall PyTorch and reinstall the GPU build. That's all.
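For the onnxruntime side, a small sketch (my own illustration, not from either repo) of checking whether the CUDA execution provider is actually usable and falling back to the CPU provider otherwise:

```python
# Hypothetical check: list the execution providers the installed onnxruntime
# build offers and open the g2p model from the log above accordingly.
import onnxruntime as ort

available = ort.get_available_providers()
print(available)  # e.g. ['CUDAExecutionProvider', 'CPUExecutionProvider'] with onnxruntime-gpu

# CUDA only works when the CUDA 12.x / cuDNN 9.x DLLs are on PATH; otherwise use CPU.
providers = (["CUDAExecutionProvider", "CPUExecutionProvider"]
             if "CUDAExecutionProvider" in available
             else ["CPUExecutionProvider"])

session = ort.InferenceSession(
    "./models/tts/maskgct/g2p/sources/g2p_chinese_model/poly_bert_model.onnx",
    providers=providers,
)
```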

@GalenMarek14


How is the quality of your local generations? Did you try to recreate the demo-page examples? Mine come out at lower quality; for instance, the whisper voice sounds like something between a whisper and a low voice:
#334

@zachysaur (Author)


I have everything on PATH and have used every possible tool. I tried the repo you suggested and got some errors; you need to fix them.

@GalenMarek14


I am just another user. I was asking about their experience because I am also getting lower-quality results, though I somehow got it working with zelenooki87's method.

@zachysaur (Author)


With your experience, you're telling someone to mess up his whole setup and remove everything.

My YouTube channel: https://www.youtube.com/@socialapps1194

@GalenMarek14


...Dude, were you sleepy when you were reading my comments? When did I ask you to do anything? zelenooki87 shared his method for working with this on Win 11, and I got it working, but my version outputs somehow wonky results, so I was asking for his observations. I wasn't asking you to do anything; I was just asking him.

I don't know if the models are different or if this method messes something up, but I couldn't reproduce the demo page examples with the same quality.

Also, FYI, you don't need to modify or remove anything. You can just try this in another environment in a separate folder if you're curious. Not that I'm asking you to, though...
