Add support for resume training from network pkl in run_training #6
base: master
Conversation
If this is what I think it is, I wish the pull request had been accepted. However, I suspect it was not accepted because the training schedule and the reporting are affected by two other variables, which the user should provide when training is resumed:

resume_pkl = None, # Network pickle to resume training from; None = train from scratch.
resume_kimg = 0.0, # Assumed training progress at the beginning. Affects reporting and training schedule.
resume_time = 0.0, # Assumed wallclock time at the beginning. Affects reporting.
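A minimal sketch of how these three arguments could be assembled together. Since `resume_kimg` is easy to forget, the helper below recovers it from the standard snapshot filename pattern (e.g. `network-snapshot-000240.pkl` encodes 240 kimg). The helper itself is a hypothetical illustration, not part of the StyleGAN2 codebase:

```python
import re

def resume_kwargs(resume_pkl):
    """Build the three resume-related keyword arguments for training_loop().

    Parses the kimg count from a snapshot filename such as
    'network-snapshot-000240.pkl'; falls back to 0.0 if the name
    does not follow that pattern.
    """
    match = re.search(r"network-snapshot-(\d+)\.pkl$", resume_pkl)
    kimg = float(match.group(1)) if match else 0.0
    return dict(
        resume_pkl=resume_pkl,  # network pickle to resume from
        resume_kimg=kimg,       # assumed progress; affects schedule and reporting
        resume_time=0.0,        # assumed wallclock offset; affects reporting only
    )

print(resume_kwargs("results/00003-run/network-snapshot-000240.pkl"))
```

Passing `resume_kimg` correctly matters because the training schedule (learning rate, minibatch ramp-up) is keyed to it; resuming with the default 0.0 restarts the schedule from scratch.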
Great. This is exactly what I missed.
Hi, sharing my Colab setup which I am following now. The second issue is that I have loaded the latest pickle file of a StyleGAN model into the StyleGAN2 model for transfer learning, but after execution it is not saving the results. I have made the following changes in the training_loop.py file: loading the latest pickle file generated from the StyleGAN model, and saving the output after every epoch. Kindly revert back on these two issues.
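One way to locate the latest snapshot pickle automatically before resuming, so the path does not have to be hard-coded in training_loop.py. This is a sketch that only assumes the standard `network-snapshot-NNNNNN.pkl` naming used by the results directory; the function name is hypothetical:

```python
import os
import re

def latest_snapshot(result_dir):
    """Return the path of the highest-numbered network-snapshot-*.pkl
    found anywhere under result_dir, or None if no snapshot exists."""
    pattern = re.compile(r"network-snapshot-(\d+)\.pkl$")
    best_path, best_kimg = None, -1
    for root, _dirs, files in os.walk(result_dir):
        for name in files:
            m = pattern.match(name)
            if m and int(m.group(1)) > best_kimg:
                best_kimg = int(m.group(1))
                best_path = os.path.join(root, name)
    return best_path
```

The returned path could then be fed to the `resume_pkl` argument discussed above.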
CUDA_ERROR_OUT_OF_MEMORY
@ahmedshingaly, and I believe a 2080 Ti has 11 GB of memory.
You are right, I am using a custom dataset, and the error still persists. I will try to run it on Google Colab and see if it gives a different result.
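For CUDA_ERROR_OUT_OF_MEMORY on ~11 GB cards, a common workaround is to retry with a smaller per-GPU minibatch. A sketch of the retry ladder one might iterate over after an OOM failure; the helper is an illustrative assumption, not part of the StyleGAN2 API:

```python
def shrink_minibatch(minibatch_gpu, min_size=1):
    """Yield successively halved per-GPU minibatch sizes to retry with
    after an out-of-memory error, e.g. 32 -> 16 -> 8 -> 4 -> 2 -> 1."""
    size = minibatch_gpu
    while size >= min_size:
        yield size
        size //= 2

print(list(shrink_minibatch(32)))
```

Lowering the minibatch changes the effective training dynamics, so results may differ slightly from runs at the original batch size.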