
Evaluation

Install packages for evaluation:

pip install -e .[eval]
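
On shells such as zsh, square brackets are glob characters, so quote the extras specifier:

pip install -e ".[eval]"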

Generating Samples for Evaluation

Prepare Test Datasets

  1. Seed-TTS testset: Download from seed-tts-eval.
  2. LibriSpeech test-clean: Download from OpenSLR.
  3. Unzip the downloaded datasets and place them in the data/ directory (see the layout sketch after this list).
  4. Update the path for the LibriSpeech test-clean data in src/f5_tts/eval/eval_infer_batch.py.
  5. Our filtered LibriSpeech-PC 4-10s subset is provided at data/librispeech_pc_test_clean_cross_sentence.lst.
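
For reference, one possible resulting layout of data/ (the directory names below, other than the provided .lst file, are assumptions; match them to the paths the scripts expect):

data/
  seedtts_testset/                               # from seed-tts-eval (name assumed)
  LibriSpeech/
    test-clean/                                  # from OpenSLR
  librispeech_pc_test_clean_cross_sentence.lst   # filtered subset shipped with this repo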

Batch Inference for Test Set

To run batch inference for evaluations, execute the following commands:

# batch inference for evaluations
accelerate config  # if not set before
bash src/f5_tts/eval/eval_infer_batch.sh
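
If you prefer to skip the interactive questionnaire, Accelerate can also write a default configuration for the current machine (a convenience, not a requirement of the script):

# write a default accelerate config without the interactive prompts
accelerate config default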

Objective Evaluation on Generated Results

Download Evaluation Model Checkpoints

  1. Chinese ASR Model: Paraformer-zh
  2. English ASR Model: Faster-Whisper
  3. WavLM Model: Download from Google Drive.
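
As one hedged example, an English ASR checkpoint can be fetched locally with the Hugging Face CLI; the repo id and target directory below are assumptions, and the Paraformer-zh and WavLM checkpoints come from their respective sources listed above:

# hypothetical example: download a faster-whisper checkpoint to a local dir
# (repo id and ckpts/ path are assumptions; adjust to your setup)
huggingface-cli download Systran/faster-whisper-large-v3 --local-dir ckpts/faster-whisper-large-v3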

Then update the paths to these checkpoints in the evaluation scripts below.

Objective Evaluation

Update the path to your batch-inference results, then run the WER / SIM evaluations:

# Evaluation for Seed-TTS test set
python src/f5_tts/eval/eval_seedtts_testset.py --gen_wav_dir <GEN_WAVE_DIR>

# Evaluation for LibriSpeech-PC test-clean (cross-sentence)
python src/f5_tts/eval/eval_librispeech_test_clean.py --gen_wav_dir <GEN_WAVE_DIR> --librispeech_test_clean_path <TEST_CLEAN_PATH>
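
For instance, with illustrative paths substituted for the placeholders (the directory names below are hypothetical; use your own):

# example invocation with illustrative paths
python src/f5_tts/eval/eval_librispeech_test_clean.py \
    --gen_wav_dir results/f5tts_librispeech/wavs \
    --librispeech_test_clean_path data/LibriSpeech/test-clean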