Feature request: generate test script from list of URLs #645

Closed
colde opened this issue May 23, 2018 · 4 comments

colde commented May 23, 2018

Basically, I want to load test based on some CloudFront logs I have, since I want to validate that performance holds up when different URLs are accessed at random. There is already a converter for .har files, but it would also be great to support either raw CloudFront/Apache/nginx logs, or simply a text file with a list of URLs.

I've been told that the current looping VUs might not be a good fit for this, but that issue #550 tracks a better way to execute such "log replay" tests.

na-- added the feature label May 23, 2018
na-- (Member) commented Jan 21, 2021

The arrival-rate executors in k6 v0.27.0 are indeed a better fit for something like this, but still not perfect. They expect a constant (or predictably increasing/decreasing) iteration (i.e. request) cadence, whereas a log replay might have no requests for a while and then a ton of requests all at once.

That said, the new executor architecture introduced in k6 v0.27.0 (#1007) allows us to add new executor types relatively easily. And, unless I am mistaken, it can now even be done with an xk6 extension, given that k6 has clear ExecutorConfig and Executor interfaces and a way to register new executor types. More info about xk6, albeit focused on "JS" plugins: https://k6.io/blog/extending-k6-with-xk6

imiric (Contributor) commented Jan 21, 2021

I'm not sure this warrants a new executor type or any kind of specific support, built-in or otherwise.

It would be fairly straightforward to load a JSON or CSV file, or even a newline-separated list of URLs, and feed them to k6's http module. Reading CloudFront/Apache/nginx logs directly also seems out of scope for k6, assuming those can be transformed externally into some simpler format.
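
For example, a minimal sketch of that approach, assuming a plain-text file (the urls.txt name is a placeholder) and a k6 version that ships SharedArray (#1739):

```javascript
import http from 'k6/http';
import { SharedArray } from 'k6/data';

// Load the newline-separated URL list once in the init context and
// share it across VUs. "urls.txt" is just a placeholder file name.
const urls = new SharedArray('urls', function () {
  return open('./urls.txt')
    .split('\n')
    .filter((line) => line.trim() !== '');
});

export default function () {
  // Pick a random URL per iteration to simulate random access patterns.
  http.get(urls[Math.floor(Math.random() * urls.length)]);
}
```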

na-- (Member) commented Jan 25, 2021

@imiric, the new executor type would be more generic than this. We lack an executor that can "start new iterations at the pre-determined time offsets [t1, t2, ..., tn]". We wouldn't need to give it a list of URLs and time offsets, just the time offsets, and maybe a repeat count (how many times the executor should cycle through the list of offsets). It would essentially be a mix of the shared-iterations and arrival-rate executors, with predetermined offsets for when each iteration gets executed. We could probably even reuse big parts of the ramping-arrival-rate code, since it listens on a channel for when the next iteration is supposed to be launched: https://github.com/loadimpact/k6/blob/229c3c3ca0bbf8ffe2b1c1ec93c6a10404416d5d/lib/executor/ramping_arrival_rate.go#L425
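
To make the idea concrete, a hypothetical scenario configuration for such an executor could look like this (the executor name and its options below do not exist in k6 and are purely illustrative):

```javascript
export const options = {
  scenarios: {
    log_replay: {
      // Hypothetical executor and option names, for illustration only.
      executor: 'predefined-arrival-rate',
      // When, relative to the scenario start, each iteration should begin.
      timeOffsets: ['0s', '120ms', '121ms', '1.8s', '1.85s'],
      repeats: 2, // cycle through the offsets twice
      preAllocatedVUs: 10,
    },
  },
};
```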

That's all we need on the executor side. We can have JS helpers that parse various log formats like nginx, Apache, or even HAR files (:exclamation:) and save the actual URLs and parameters into a SharedArray (#1739), as sketched below. No need to hard-code any of that logic in k6; JS helpers would be plenty good enough and would give us much greater flexibility in covering different use cases.
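
A minimal sketch of such a helper, assuming a simplified space-separated log format ("<epoch-ms> <method> <path>") in a file called access.log; a real nginx/Apache/CloudFront parser would just use a different split or regex:

```javascript
import { SharedArray } from 'k6/data';

// Parse the log once in the init context into { offsetMs, method, path }
// records shared across VUs. The file name and line format are assumptions.
const entries = new SharedArray('log entries', function () {
  const lines = open('./access.log')
    .split('\n')
    .filter((l) => l.trim() !== '');
  const startMs = Number(lines[0].split(' ')[0]);
  return lines.map((line) => {
    const [ts, method, path] = line.split(' ');
    return { offsetMs: Number(ts) - startMs, method, path };
  });
});
```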

Then we pass only a much smaller array with the time offsets as a parameter to this new executor. The scenario code can use #1320 (or #1539) to match the current scenario iteration number to the corresponding entry in the big SharedArray and get the request details it needs to execute 🎉

And we can't use arrival-rate (or any of the other current executors) because we need an open-model executor that doesn't start iterations at regular intervals. We probably could use an arrival-rate executor by having each iteration sleep() at its start until the time offset for its request arrives, but that would be super awkward, would make iteration_duration (and the progress bar VU info) useless, and would almost turn the arrival-rate executor into a closed-model one.

Still, instead of rushing forward with this, maybe a PoC with constant-arrival-rate and sleep() would be a good first step. We kind of need #1320 or #1539 first anyway, to make the most of this executor...
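
For reference, a rough sketch of what that PoC could look like, assuming a pre-processed replay.json with { offsetMs, url } entries (the file name, data shape, and numbers are placeholders) and the k6/execution API that eventually shipped for the #1320 use case:

```javascript
import http from 'k6/http';
import { sleep } from 'k6';
import exec from 'k6/execution';
import { SharedArray } from 'k6/data';

// Pre-processed log entries: [{ "offsetMs": 0, "url": "https://..." }, ...]
const requests = new SharedArray('requests', function () {
  return JSON.parse(open('./replay.json'));
});

export const options = {
  scenarios: {
    log_replay: {
      executor: 'constant-arrival-rate',
      rate: 10, // must be at least as high as the busiest burst in the log
      timeUnit: '1s',
      duration: '1m',
      preAllocatedVUs: 50,
    },
  },
};

export default function () {
  const i = exec.scenario.iterationInTest;
  if (i >= requests.length) {
    return; // ran out of recorded requests
  }
  const req = requests[i];

  // Sleep until this request's recorded offset from the scenario start,
  // which is exactly the awkwardness described above.
  const elapsedMs = Date.now() - exec.scenario.startTime;
  if (req.offsetMs > elapsedMs) {
    sleep((req.offsetMs - elapsedMs) / 1000);
  }
  http.get(req.url);
}
```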

oleiade (Member) commented Oct 4, 2023

The team discussed this and agreed not to proceed with this feature. As a result, we are closing this issue.

oleiade closed this as completed Oct 4, 2023