Hello!
Thanks for the great work! I was trying to fine-tune a model on non-English datasets (Russian, etc.). The resulting voice is really good, but the output keeps a very strong English accent even after long training. Are there any ways to reduce the accent (or ideally get rid of it entirely)?
I suspect the problem is that fine-tuning starts from the English model.
Thank you! I would also mention that training on Cyrillic text requires switching from english_cleaners to basic_cleaners. I've built a new tokenizer and started training, but the results so far are not good.
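For reference, here is roughly what I mean by the two changes. This is only a sketch of the idea, not a patch for this specific repo: it assumes a keithito/Tacotron-style cleaners setup and uses the Hugging Face `tokenizers` package for the new BPE vocab; the file names and special tokens are placeholders.

```python
# Sketch only: assumes a Tacotron-style text pipeline; file names
# (transcripts_ru.txt, ru_tokenizer.json) and special tokens are placeholders.
import re
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

_whitespace_re = re.compile(r"\s+")

def basic_cleaners(text: str) -> str:
    """Lowercase + collapse whitespace only.

    Unlike english_cleaners, this does NOT run unidecode, so Cyrillic
    characters survive instead of being transliterated to Latin.
    """
    text = text.lower()
    return _whitespace_re.sub(" ", text).strip()

def train_ru_tokenizer(transcript_file: str, out_path: str, vocab_size: int = 512) -> None:
    """Train a small BPE vocab on the cleaned Russian transcripts."""
    tok = Tokenizer(models.BPE(unk_token="[UNK]"))
    tok.pre_tokenizer = pre_tokenizers.Whitespace()
    trainer = trainers.BpeTrainer(
        vocab_size=vocab_size,
        special_tokens=["[STOP]", "[UNK]", "[SPACE]"],
    )
    tok.train(files=[transcript_file], trainer=trainer)
    tok.save(out_path)

if __name__ == "__main__":
    train_ru_tokenizer("transcripts_ru.txt", "ru_tokenizer.json")
```

The key point is the cleaner: english_cleaners transliterates Cyrillic to ASCII before tokenization, so the model never actually sees Russian characters unless you swap it out.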
Can you please tell how big your dataset was and how long you trained? I wonder how large a dataset needs to be for fine-tuning on a new language.
@andreibezborodov Hi, could you help me get started with fine-tuning on other languages? Telegram: @cherpekat. I couldn't reach you at the email in your GitHub profile.