diff --git a/README.md b/README.md
index a0a6a49..32bceef 100644
--- a/README.md
+++ b/README.md
@@ -116,7 +116,7 @@ Logging arguments:
 
 After training, you can evaluate the model in terms of lingustic content (WER and CER) and target characteristic (SV).
 
-You need to keep the model arguments in the training phase. the code only supports the version in which the number of target utterances is 1.
+You need to keep the model arguments in the training phase. The code only supports the version in which the number of target utterances is 1.
 
 ```
 python main.py test
 
 evaluation arguments:
@@ -140,25 +140,9 @@ We provide two versions of models depending on input types (mel, cpc).
 
 ## 4. Custom convert
 
-For custom enhancement, you can estimate enhanced speech by running code with ```custom_enhance.py```.
-The codes include input data processing (downsample from 48 kHz to 16 kHz).
-```
-python custom_enhance.py
-
-enhance arguments:
-  --device: Cuda device or CPU
-  --noisy_path: Path (folder) which contains noisy wav files
-  --model_name: Model version (small, base, large) - you can use one of them
-```
-
-## 5. Converted samples
-
-You can find converted samples in ```./samples``` or please visit our [demo site](https://winddori2002.github.io/vc-demo.github.io/).
-
-In ```./samples```, they are generated by "TriAAN-VC-CPC".
-
-For sampels in demo site, they are generated by paper version ("TriAAN-VC-CPC")
-
+For custom conversion, you can run the code with ```convert.py```.
+The codes include data processing, predicting, and vocoding.
+You can find converted examples in ```./samples``` or please visit our [demo site](https://winddori2002.github.io/vc-demo.github.io/).
 
 # Experimental Results