bogihsu / tacotron2-pytorch
Yet another PyTorch implementation of Tacotron 2 with reduction factor and faster training speed.
License: MIT License
Hi, thanks a lot for this repo.
I tried this code on Blizzard Challenge 2011 data with WaveRNN as the vocoder. The alignment is quite good, but the pitch of the synthesized speech is weird, and so is the duration; sometimes the synthesized speech is faster than the ground truth. I tried copy synthesis with the WaveRNN vocoder and the speech is quite good, so I don't know why this happens. Could you give some advice? Attached please find the alignments, synthesized speech, and ground-truth samples.
Thanks in advance.
When I train the NVIDIA/tacotron2 model, I always get "End with low power." or "Reached max decoder steps". When I train with your code, the problem is gone. What did you do to solve the "low power" and "reached max decoder steps" issues?
Hi! This is not really an issue, but I've got a question about the tensors mel_outputs and mel_outputs_postnet in the save_mel() function in inference.py.
When running inference I get a tensor of shape (1, 80, 253), for example. When I pass this tensor along to my working WaveGlow model (from this project), it takes the mel spectrograms and converts them to audio, but the result is just static.
The provided mel tensors from the WaveGlow people have differently scaled magnitude values. Here is a description of a sample vector from one of their mel tensors:
DescribeResult(nobs=760, minmax=(-9.1658077, -4.7563148), mean=-6.8498592, variance=0.56033307, skewness=-0.34983542561531067, kurtosis=-0.3708913831845826)
Here is a description of a sample vector from one of your mel tensors after I've run inference:
DescribeResult(nobs=282, minmax=(0.042787716, 0.4756053), mean=0.22464047, variance=0.0148157105, skewness=0.2994621992111206, kurtosis=-0.9376598291505811)
The number of observations (nobs) corresponds to the number of frames (I think?), so it shouldn't matter how many there are. But the scale of the minmax tuple is clearly off. Have you thought about how to rescale the output so that the two networks could work together? Any input you can provide would be very helpful.
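For what it's worth, the mismatch above looks like a normalization difference: NVIDIA's WaveGlow is trained on natural-log mel magnitudes (hence values around -9 to 2), while the values here sit in [0, 1]. Below is a minimal sketch of undoing a [0, 1] normalization back to log-mels, assuming the repo normalizes dB-scaled mels with the common Tacotron-style constants; `MIN_LEVEL_DB`, `REF_LEVEL_DB`, and the clipping scheme are assumptions that should be checked against this repo's preprocessing code.

```python
import numpy as np

# Assumed constants -- verify against the repo's audio/preprocessing config.
# These follow common Tacotron preprocessing conventions, not necessarily
# this repo's exact values.
MIN_LEVEL_DB = -100.0
REF_LEVEL_DB = 20.0

def denormalize_to_log_mel(mel_norm):
    """Map a [0, 1]-normalized mel back to natural-log magnitudes
    (the scale WaveGlow expects). A sketch, not the repo's exact inverse."""
    # [0, 1] -> dB in [MIN_LEVEL_DB, 0], then undo the reference-level shift
    mel_db = np.clip(mel_norm, 0.0, 1.0) * -MIN_LEVEL_DB + MIN_LEVEL_DB
    mel_db = mel_db + REF_LEVEL_DB
    # dB -> amplitude -> natural log: amp = 10^(dB/20), log_mel = ln(amp)
    return np.log(np.power(10.0, mel_db / 20.0))
```

With these constants, an input of 0.0 maps to about -9.21 and 1.0 to about 2.30, which matches the magnitude range WaveGlow's provided mels show.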
When the reduction factor equals 3.
Have you tried the performance with a reduction factor > 1? Do you have any samples?
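For context, a reduction factor r > 1 (from the original Tacotron) makes the decoder emit r mel frames per step, so it runs only T/r steps for T frames, which is the main source of the training speedup. A minimal sketch of the usual bookkeeping, assuming the common (B, T/r, r*n_mels) decoder output layout rather than this repo's exact code:

```python
import torch

batch, n_mels, r = 2, 80, 3
decoder_steps = 100  # with r = 3, this covers 300 mel frames

# Hypothetical decoder output: each step predicts r frames at once,
# concatenated along the feature axis.
decoder_out = torch.randn(batch, decoder_steps, r * n_mels)

# Unfold each step's r frames back onto the time axis:
# (B, T/r, r*n_mels) -> (B, n_mels, T), the usual Tacotron mel layout.
mel = decoder_out.view(batch, decoder_steps * r, n_mels).transpose(1, 2)
print(mel.shape)  # torch.Size([2, 80, 300])
```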
Can I use waveglow, particularly the pretrained one from https://github.com/NVIDIA/waveglow for inference?
Or are some changes required?
Thanks
Hi. In README.md you suggest three ways to run the training:
The instructions for options 2 and 3 are identical. I think you need to update option 3 to show which arguments enable TensorBoard usage.
Line 120 in model.py
This should set training=self.training; otherwise dropout remains active during evaluation.
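A minimal sketch of the pattern this issue describes (hypothetical module, not the repo's actual code at line 120): functional dropout takes an explicit training flag, and hard-coding it keeps dropout active even after model.eval().

```python
import torch
import torch.nn.functional as F

class PrenetLike(torch.nn.Module):
    """Illustrative module only -- not the repo's actual layer."""
    def __init__(self, in_dim=80, out_dim=256):
        super().__init__()
        self.linear = torch.nn.Linear(in_dim, out_dim)

    def forward(self, x):
        # Buggy form: F.dropout(..., training=True) stays on in eval mode.
        # Fixed form: tie it to self.training so model.eval() disables it.
        return F.dropout(F.relu(self.linear(x)), p=0.5,
                         training=self.training)
```

One caveat worth noting: the Tacotron 2 paper deliberately keeps prenet dropout active at inference to add output variation, so whether this is a bug depends on which layer line 120 belongs to.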
Hello @BogiHsu, today I was reading the paper "WG-WaveNet: Real-Time High-Fidelity Speech Synthesis without GPU" and found
your implementation of WG-WaveNet at https://github.com/BogiHsu/WG-WaveNet.
I'm very excited and am trying to train the WG-WaveNet network.
But I didn't find any source code or instructions for connecting Tacotron2 with WG-WaveNet.
In this repo, I can reference the vocoder demos for WaveGlow and HiFi-GAN, but not WG-WaveNet.
Could you add a demo for WG-WaveNet?