
Provide an extension point for prediction "pruning" logic #115

Open
pirj opened this issue Jul 11, 2019 · 0 comments
Labels: feature request (an issue that asks for a feature which is not supported yet), help wanted

Comments

@pirj (Contributor) commented Jul 11, 2019

Currently, we have no weighting logic at all: when the prediction size exceeds the specified limit, we simply pick the first X examples to run.

Sometimes this leads to predictions in which critical tests are dropped. We need to extract the prediction "pruning" logic into a separate class so we can alter the behaviour with different strategies.
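
For context, a minimal sketch of what the current behaviour amounts to (file names and variable names are illustrative, not taken from the codebase):

```ruby
# Hypothetical illustration of the current behaviour: when the prediction is
# larger than the configured limit, only the first `limit` examples survive.
prediction = %w[spec/models/user_spec.rb spec/models/order_spec.rb spec/services/billing_spec.rb]
limit = 2

pruned = prediction.size > limit ? prediction.first(limit) : prediction
# => ["spec/models/user_spec.rb", "spec/models/order_spec.rb"]
```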

Future expected strategies:

  • First tests
  • Random tests
  • Tests with weights

Expected outcome

A class that encapsulates the current logic ("first tests") and can easily be substituted with a different class implementing different logic.
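
One way this could look, sketched in plain Ruby under the assumption that a prediction is a list of example/file identifiers. All class and method names below are hypothetical and only illustrate the strategy shape; they are not taken from the existing codebase:

```ruby
# Hypothetical sketch of pluggable pruning strategies. Each pruner takes a
# prediction (a list of example identifiers) and a limit, and returns at most
# `limit` of them; the selection policy is the only thing that differs.

class FirstTestsPruner
  def prune(prediction, limit)
    prediction.first(limit) # current behaviour: keep the first X examples
  end
end

class RandomTestsPruner
  def prune(prediction, limit)
    prediction.sample(limit) # keep a random subset instead of the head
  end
end

class WeightedTestsPruner
  # `weights` maps an example identifier to its importance (higher = keep first).
  def initialize(weights)
    @weights = weights
  end

  def prune(prediction, limit)
    prediction.max_by(limit) { |example| @weights.fetch(example, 0) }
  end
end

# The predictor would receive a pruner and delegate to it, so swapping the
# strategy becomes a one-line configuration change, e.g.:
#   pruner = WeightedTestsPruner.new("spec/critical_spec.rb" => 100)
#   pruner.prune(prediction, limit)
```

Whatever names end up being used, the point is that the default "first tests" behaviour remains the one shipped initially, while an alternative strategy can be substituted without touching the prediction code itself.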

pirj added the feature request and help wanted labels on Jul 11, 2019