
Quick LPCNet update #181

Open
jmvalin opened this issue Feb 24, 2022 · 6 comments
Comments

@jmvalin
Member

jmvalin commented Feb 24, 2022

Just a few updates around LPCNet. The repo has been transferred from the mozilla organization to xiph. Everything should keep working as usual (let us know if it doesn't).

There have recently been many changes to the code. One set of changes makes synthesis even more efficient, giving roughly a 3.5x reduction in CPU load at the same quality. The efficiency improvements are documented in a paper recently accepted to ICASSP 2022:

A second set of changes makes the LPC part fully differentiable, which should make it possible to train LPCNet end-to-end with any set of input features. These changes (which are optional at training time) are documented in the following pre-print:
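For context on what making the LPC part differentiable means: LPC synthesis predicts each sample as a linear combination of past samples, which is just sums and products and therefore admits gradients with respect to both the signal and the coefficients. Below is a minimal numpy sketch of the prediction step only — illustrative, not the repository's training code, and `lpc_predict` is a hypothetical helper name:

```python
import numpy as np

def lpc_predict(signal, coeffs):
    """Short-term linear prediction: p[t] = sum_k a[k] * s[t-1-k].
    Built only from sums and products, so the same computation is
    differentiable w.r.t. both signal and coefficients when expressed
    in an autodiff framework."""
    order = len(coeffs)
    padded = np.concatenate([np.zeros(order), np.asarray(signal, float)])
    pred = np.zeros(len(signal))
    for k in range(order):
        # coeffs[k] multiplies the sample k+1 steps in the past
        pred += coeffs[k] * padded[order - 1 - k : order - 1 - k + len(signal)]
    return pred

s = np.array([1.0, 2.0, 3.0, 4.0])
a = np.array([0.5])      # first-order predictor: p[t] = 0.5 * s[t-1]
p = lpc_predict(s, a)    # [0.0, 0.5, 1.0, 1.5]
e = s - p                # residual/excitation that the network models
```

In LPCNet the network only has to model the residual `e`, which is why the LPC prediction being differentiable lets gradients flow back through it to whatever produces the input features.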

@stayforapple

Could you please develop an open-source ASIC (Application-Specific Integrated Circuit) just for LPCNet, so that synthesis can run on a cheap IC? That would help LPCNet's popularity.

@stayforapple

Also, how is the quality of the 1.6 kb/s vocoder with the new LPCNet?

@acsuwut

acsuwut commented Apr 6, 2022

Hey @jmvalin -

Are the changes you're referring to at the start of this thread in the repository now?

Also, is there any formal roadmap for plans you have for this repository?

Thanks!

@jmvalin
Member Author

jmvalin commented Apr 6, 2022

Yes, all of the changes I mentioned are currently in master. There is no formal roadmap at this point.

@acsuwut

acsuwut commented Apr 6, 2022

> Yes, all of the changes I mentioned are currently in master. There is no formal roadmap at this point.

Awesome, thank you for the quick response.

@SolomidHero

Hi @jmvalin !
Is it correct that, by default, any newly trained model uses 1) the efficiency improvements and 2) the differentiable LPC computation?
Also, what about the models listed at https://media.xiph.org/lpcnet/data/ ?
Do they use the above improvements if they were listed after April 2022?
