A frame2frame, video2video video editor based on Stable Diffusion.
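
As a rough illustration of the frame2frame idea, the sketch below runs Stable Diffusion img2img over each decoded frame using Hugging Face diffusers and imageio. The model id, prompt, and strength values are illustrative assumptions, not this project's actual pipeline; the repos listed below add the temporal-consistency machinery that a naive per-frame loop lacks.

```python
# Minimal per-frame video2video sketch (assumed settings, not the project's real pipeline).
import imageio
import numpy as np
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed base checkpoint
    torch_dtype=torch.float16,
).to("cuda")

reader = imageio.get_reader("input.mp4")
fps = reader.get_meta_data().get("fps", 24)
writer = imageio.get_writer("output.mp4", fps=fps)

for frame in reader:                    # frame: HxWx3 uint8 array
    image = Image.fromarray(frame).resize((512, 512))
    # Re-seeding per frame keeps the initial noise identical across frames,
    # which reduces (but does not remove) temporal flicker.
    generator = torch.Generator(device="cuda").manual_seed(42)
    edited = pipe(
        prompt="a watercolor painting of the scene",  # illustrative edit prompt
        image=image,
        strength=0.5,                   # how far each frame drifts from the source
        guidance_scale=7.5,
        generator=generator,
    ).images[0]
    writer.append_data(np.asarray(edited))

writer.close()
```
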
- nateraw / stable-diffusion-videos : 4.1k
- Picsart-AI-Research / Text2Video-Zero : 3.7k
- lucidrains / video-diffusion-pytorch : 1.1k
- Make-A-Video : Meta AI project page; paper: arXiv:2209.14792
- showlab / Tune-A-Video : 4k
- omerbt / TokenFlow : 1.4k
- rese1f / StableVideo : 1.3k
- ChenyangQiQi / FateZero : 1k
- HumanAIGC / AnimateAnyone : 13.5k; paper: arXiv:2311.17117 (paper and code: moorethreads / moore-animateanyone : 2.3k)
- AILab-CVC / VideoCrafter : 3.8k
- ali-vilab / i2vgen-xl : 2.3k
- leandromoreira / digital_video_introduction : a more theory-oriented introduction to how digital video works
- gradio docs
- Tkinter and CustomTkinter docs
- ParthJadhav / Tkinter-Designer
- PyQtGraph
- If you run into any problems during secondary development or deployment, feel free to contact us at any time.
- QQ email: [email protected]