
Handle NA values during HPO #84

Open
borauyar opened this issue Jul 10, 2024 · 0 comments

@borauyar
Member

During hyperparameter optimisation, some parameter combinations may lead to exploding or vanishing gradients that yield NaN loss values. This kills the training run, but the optimisation should be allowed to continue to the next HPO iteration with a different parameter combination. In such cases, replacing the NaN loss with a large finite value should solve the issue: the optimiser then avoids that parameter combination and the HPO loop keeps running.
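A minimal sketch of one way this could look, assuming an Optuna-style HPO loop (the project's actual tuner may differ); `train_model` here is a hypothetical stand-in for the real training routine:

```python
import math

import optuna


def train_model(lr: float) -> float:
    """Hypothetical stand-in for the real training loop.

    Large learning rates "explode" and return NaN, mimicking the
    unstable parameter combinations described above.
    """
    if lr > 1e-2:
        return float("nan")
    return (math.log10(lr) + 3.0) ** 2  # toy loss, minimal near lr = 1e-3


def objective(trial: optuna.Trial) -> float:
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    loss = train_model(lr)

    # Replace NaN/inf losses with a large finite penalty so the study
    # keeps running and steers away from this parameter combination.
    if not math.isfinite(loss):
        return 1e6
    return loss


study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=30)
print(study.best_params)
```

Returning a large finite penalty (rather than letting the trial crash) keeps the trial marked as completed, so the sampler can still learn to avoid that region; raising `optuna.TrialPruned` would be an alternative if the failed trial should be discarded instead.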
