Handles LLM calls to locally hosted models (via Ollama) or to the Perplexity API.
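
Roughly how that routing looks, a minimal sketch only: the Ollama and Perplexity endpoint paths follow their public docs, but the model names and the `llm_complete` helper are illustrative assumptions, not this repo's actual code.

```python
# Sketch: route a completion either to a local Ollama server or to Perplexity.
import os
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"
PERPLEXITY_URL = "https://api.perplexity.ai/chat/completions"


def llm_complete(prompt: str, backend: str = "ollama") -> str:
    if backend == "ollama":
        # Ollama's generate endpoint returns the full text when stream=False.
        resp = requests.post(
            OLLAMA_URL,
            json={"model": "llama3", "prompt": prompt, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["response"]

    # Perplexity exposes an OpenAI-compatible chat completions endpoint.
    resp = requests.post(
        PERPLEXITY_URL,
        headers={"Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}"},
        json={
            "model": "sonar",  # assumed model name; pick whatever you use
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```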
Text-to-image generation via locally hosted Stable Diffusion.
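
A hedged sketch of a text-to-image call against a locally hosted Stable Diffusion server. The endpoint shown is the AUTOMATIC1111 WebUI API (`/sdapi/v1/txt2img`); the actual SD host, port, and parameters used here may differ.

```python
# Sketch: request an image from a local Stable Diffusion WebUI instance.
import base64
import requests

SD_URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"  # assumed local WebUI address


def txt2img(prompt: str, out_path: str = "out.png") -> str:
    resp = requests.post(
        SD_URL,
        json={"prompt": prompt, "steps": 25, "width": 512, "height": 512},
        timeout=300,
    )
    resp.raise_for_status()
    # The WebUI API returns base64-encoded image data in the "images" list.
    image_b64 = resp.json()["images"][0]
    with open(out_path, "wb") as f:
        f.write(base64.b64decode(image_b64))
    return out_path
```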
Prompt-to-Storyboard: turns a single prompt into a video through a chained pipeline of LLM, TTS, and text2img calls.
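
A rough sketch of that chain under stated assumptions: an LLM splits the prompt into scenes, each scene gets TTS narration and a text-to-image frame, and ffmpeg stitches the frames into a video. `llm_complete` and `txt2img` are the sketches above; `tts()` and the ffmpeg step are placeholders for whatever this backend actually uses, not its real pipeline.

```python
# Sketch: prompt -> scene plan (LLM) -> narration (TTS) + frames (txt2img) -> video.
import json
import subprocess


def tts(text: str, out_path: str) -> str:
    """Placeholder: synthesize narration audio for one scene."""
    raise NotImplementedError("wire up your TTS engine here")


def prompt_to_storyboard(prompt: str, out_video: str = "storyboard.mp4") -> str:
    # 1. Ask the LLM for a JSON list of short scene descriptions.
    plan = llm_complete(
        "Return a JSON array of short scene descriptions for a video about: "
        + prompt
    )
    scenes = json.loads(plan)

    # 2. Generate one image and one narration clip per scene.
    frames, clips = [], []
    for i, scene in enumerate(scenes):
        frames.append(txt2img(scene, out_path=f"scene_{i}.png"))
        clips.append(tts(scene, out_path=f"scene_{i}.wav"))

    # 3. Stitch the frames (and, in the real pipeline, the audio) into a video.
    subprocess.run(
        ["ffmpeg", "-y", "-framerate", "1", "-i", "scene_%d.png", out_video],
        check=True,
    )
    return out_video
```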
Basically, this is my monster single backend for Discord and Slack bots, as well as frontends such as meyer.id (currently offline due to AI safety concerns around uncensored LLM and SD models).
Note: remove "-i", "4" from the Dockerfile when rolling out database updates.
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.