Threshold improvements #1136
Another thing gleaned from @cajames's excellent k6 presentation (specifically, this question at the end). Somewhat related to the improvements above, users might be interested in setting thresholds only for a specific period of the test - for example, when they simulate a sharply spiked load on their system. As mentioned in the demo, this can be achieved with custom metrics, though I think it may be somewhat hacky and tedious to set up that way. It might be slightly easier after #1007 is merged, though it's still going to be tricky. But since we're looking to more deeply introduce the concept of time in thresholds, this is a valid use case we probably should support when we enhance the thresholds. That way there wouldn't be any need for custom metrics.

Another approach would be to tag metrics coming from different executors (after #1007) or stages, so a threshold could be set on the resulting submetric:

```js
export let options = {
  thresholds: {
    "http_req_duration{stage:someStageID}": ["p(99)<250"],
  },
  // ...
};
```

This is probably the more robust approach in general, and easier to implement for us, but I can see some use cases where the time-based threshold configuration would be preferred, so we should probably do both.
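For reference, a minimal sketch of the custom-metrics workaround mentioned above could look something like this; the metric name, stage durations, and the wall-clock window check are all made up for illustration, not an official pattern:

```js
import http from "k6/http";
import { Trend } from "k6/metrics";

// Hypothetical custom metric that only receives samples during the spike.
const spikeDuration = new Trend("spike_http_req_duration", true);
// Rough test-start timestamp; VU init happens around the start of the test,
// which is part of what makes this approach hacky.
const testStart = Date.now();

export let options = {
  stages: [
    { duration: "30s", target: 10 },  // warm-up
    { duration: "30s", target: 200 }, // the spike we actually care about
    { duration: "30s", target: 10 },  // cool-down
  ],
  thresholds: {
    // The threshold is attached to the custom metric, not to http_req_duration.
    spike_http_req_duration: ["p(99)<250"],
  },
};

export default function () {
  const res = http.get("https://test.k6.io/");
  const elapsedSec = (Date.now() - testStart) / 1000;
  // Only record samples that fall inside the 30s-60s spike window.
  if (elapsedSec >= 30 && elapsedSec < 60) {
    spikeDuration.add(res.timings.duration);
  }
}
```

The threshold then passes or fails based only on samples recorded during the spike window, but keeping the window boundaries in sync with the stages by hand is exactly the tedious part.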
As was pointed out in #1267 (comment), another improvement we could make (and this can probably be done even before we implement #961 or #763) is to calculate the thresholds on more than one core. Depending on how we implement the things above and in #763, it might make sense to have a single goroutine be responsible for all of the thresholds of a single metric?
A related case appeared in the forum here: thresholds are used there only as a workaround to get submetrics, but the question was about how to get a submetric's rate for the duration of a scenario rather than for the duration of the whole test. Scenarios in this case are a kind of substitute for a time period in the test, so it might make sense to consider that as an option as well.
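As an illustrative sketch of that scenario-based approach (the scenario name and numbers are made up): since k6 tags every sample with the scenario name by default, a threshold on a scenario submetric effectively only covers that part of the test:

```js
export let options = {
  scenarios: {
    // Hypothetical scenario covering only the part of the test we care about.
    spike: {
      executor: "ramping-vus",
      startTime: "30s",
      startVUs: 0,
      stages: [
        { duration: "15s", target: 200 },
        { duration: "15s", target: 0 },
      ],
    },
  },
  thresholds: {
    // Only evaluated against samples tagged with scenario:spike.
    "http_req_duration{scenario:spike}": ["p(99)<250"],
  },
};
```

The threshold is still evaluated over the whole run, but because only the spike scenario produces samples with that tag, it acts as a stand-in for a time window.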
Closing in favor of #2379, as this is a duplicate; that issue already links back to this one.
Currently, you cannot specify a threshold for something like "requests per second", "errors per minute", or "values per period" in general.
Somewhat related to the above, threshold evaluations are based only on all of the metrics gathered during the whole duration of the test. Or, more precisely (for when you have `abortOnFail`), on the metrics gathered since the test started up until a certain moment. That evaluation can be delayed with the `delayAbortEval` option, but there's no way to specify a time window of metrics to evaluate, like "p(99) of request duration for the last 30 minutes should be < X" or something like that.

Both of these things would be complicated to implement and need very careful evaluation of whether we should even do them. The main reason for not doing something like this, besides the complexity, is that we don't want the required data crunching to affect the test execution and skew the results. We also don't want to explode the k6 memory usage further than it currently is (#1068). So these things are most likely prerequisites to any such effort: #961, #763
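To make the two existing per-threshold knobs mentioned above concrete, and to sketch what the requested time window could look like, here's an illustrative config; `abortOnFail` and `delayAbortEval` exist today, while the commented-out `window` option is purely hypothetical and does not exist in k6:

```js
export let options = {
  thresholds: {
    http_req_duration: [
      // Existing options: abort the test if the threshold fails, but only
      // start evaluating it after the first 10s of the test.
      { threshold: "p(99)<250", abortOnFail: true, delayAbortEval: "10s" },
      // Hypothetical: evaluate only over the metrics from the last 30 minutes.
      // k6 does NOT support a "window" option; this line is just an illustration.
      // { threshold: "p(99)<250", window: "30m" },
    ],
  },
};
```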