VRAM increasing slightly every step until I run out. #86
I'm unable to reproduce this using your shared script and the latest commit. Make sure your repo is up to date, since the fix for this was merged just yesterday. Disabling system memory fallback will also fix this (I believe you can do that in the NVIDIA Control Panel, though I don't use Windows), as the GPU will then recognize it's time to deallocate when it reaches the max.
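Beyond the control-panel toggle, a script-level mitigation sometimes helps with fragmentation-driven VRAM growth in PyTorch apps. This is an assumption on my part, not something suggested in the thread, and `app.py` is just the launch command from this repo:

```shell
# Hypothetical mitigation (not from this thread): cap the block size that
# PyTorch's caching CUDA allocator will split. Smaller caps can reduce
# fragmentation, which sometimes masquerades as a slow per-step leak.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
# then launch as usual, e.g.:
#   python app.py
```

If the usage still climbs monotonically every step, the cause is more likely a true reference leak (tensors from earlier steps kept reachable) than fragmentation, and allocator tuning won't save you.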
Okay, thank you!
I think that was my issue. I disabled the sysmem fallback and it seems to be helping.
Never mind. Now it says:

[error output not captured in the page scrape]

I don't understand why it is still doing this.
Hmm. Do you have the temp, res, or something else set super high? It shouldn't be allocating 14 GB with the script you provided in the original post.
I tried app.py as well, and it got to step 17 and crashed (OOM). I haven't modified anything.
I set the steps to 31 because I want a 10 second video, but everything else is the default. |
I get a crash after step 6 on a Radeon 7900 XTX (24 GB VRAM) with 80 GB of system RAM.
I've got the same situation as reported above: memory usage goes up and down in line with the GC collection and CUDA cache clearing, but on every step it climbs a bit higher until OOM at step 17 (so it's not a VAE issue).
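For reference, the "GC collecting and CUDA cache cleaning" mentioned above usually looks like the helper below when called after each sampling step. This is a minimal sketch of the common pattern, not code from this repo; note it only releases *cached, unreferenced* blocks, so a true leak (tensors still reachable from step to step) will keep climbing regardless:

```python
import gc


def free_step_memory():
    """Release Python garbage and, when PyTorch is present, its cached
    CUDA blocks. Helps with fragmentation between steps; does NOT fix a
    real reference leak, where old step tensors remain reachable."""
    gc.collect()
    try:
        import torch  # guarded so the sketch runs even without PyTorch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()  # return cached blocks to the driver
    except ImportError:
        pass  # torch not installed; nothing GPU-side to release
```

The tell-tale sign in this thread (usage dips after each cleanup but the floor rises every step) points at the latter: something is holding references across steps, which cache clearing cannot reclaim.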
And the VRAM starts off low but gradually increases every step until it hits the limit. I have 24 GB of VRAM.