Response handling on the client is very slow compared to the server #1189

Open
nazar-pc opened this issue Aug 18, 2023 · 11 comments

@nazar-pc

There are two apps: one server and one client.

The client makes 1000 requests asynchronously to the server, and the server replies to each of them with a 2 MiB response (1 MiB of data, hex-encoded).

Result: the server (a Substrate node, actually) is done preparing responses in under 5 seconds (debug build) and has near-zero CPU usage after that, while the client is busy for ~70 seconds at ~130% CPU usage (so using just over one CPU core).

The test runs on a 24C/32T 13900K and the tokio runtime used is multi-threaded. I have no idea what the client is doing for so long while occupying just one core, or why this affects the client but not the server.
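For context, a minimal reproduction sketch of the scenario above, assuming jsonrpsee ~0.20's WS client and ClientT API; the endpoint URL, the example_bigResponse method name, and the parameters are placeholders, not the actual Substrate RPC involved. All requests are issued concurrently from a single task, which becomes relevant later in the thread.

```rust
use futures::future;
use jsonrpsee::core::client::ClientT;
use jsonrpsee::rpc_params;
use jsonrpsee::ws_client::WsClientBuilder;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = WsClientBuilder::default()
        .build("ws://127.0.0.1:9944")
        .await?;
    let client_ref = &client;

    // Issue 1000 requests concurrently from this one task; each response is
    // expected to be ~2 MiB of hex-encoded data.
    let requests = (0..1000).map(move |i| async move {
        client_ref
            .request::<String, _>("example_bigResponse", rpc_params![i])
            .await
    });

    let started = std::time::Instant::now();
    let responses = future::join_all(requests).await;
    let bytes: usize = responses
        .into_iter()
        .map(|r| r.map(|s| s.len()).unwrap_or(0))
        .sum();
    println!("received {bytes} bytes in {:?}", started.elapsed());
    Ok(())
}
```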

@niklasad1
Member

Hey @nazar-pc, are you using the HTTP client or the WS client?

Also, please report which jsonrpsee version you are running.

@nazar-pc
Author

nazar-pc commented Aug 18, 2023

WS client; the version is unfortunately still 0.16.2 (what Substrate is using).

@niklasad1
Member

niklasad1 commented Aug 18, 2023

Ok, I see. We recently merged #1145, which is included in v0.20.

I hope that makes things better, because we refactored the background task to multiplex send/receive operations. Maybe you could test it by patching just the client, if that's not too annoying.

@nazar-pc
Author

Hm, simply swapping the version for the git dependency didn't quite work; it failed to compile. Probably some version needs to be bumped somewhere, or there is a semver violation somewhere:

```
error[E0599]: no method named `add_trust_anchors` found for struct `RootCertStore` in the current scope
   --> /home/nazar-pc/.cargo/git/checkouts/jsonrpsee-5ae67701a3cedc92/1af525c/client/transport/src/ws/mod.rs:547:10
    |
547 |             roots.add_trust_anchors(webpki_roots::TLS_SERVER_ROOTS.iter().map(|ta| {
    |             ------^^^^^^^^^^^^^^^^^ help: there is a method with a similar name: `add_server_trust_anchors`
```

@niklasad1
Member

Ah ok, maybe try updating tokio_rustls, though I'm not sure whether that works while you still have version 0.16.2 in your tree.

@nazar-pc
Author

nazar-pc commented Aug 18, 2023

Semver exists precisely so that this shouldn't break; I especially wouldn't expect tokio to break it 😕

UPD: Nope, that didn't work.

@nazar-pc
Author

I see, the reason is that add_trust_anchors was only added in rustls 0.21.6, but jsonrpsee requires only 0.21 and I had 0.21.2, which satisfied the requirement but didn't have the method. I think you want to enforce this requirement in jsonrpsee-client-transport (for instance by importing rustls directly), or use the deprecated method until upgrading to 0.22.x.
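For illustration, here is what a minimum-version pin could look like; a sketch only, assuming jsonrpsee-client-transport declares (or would declare) rustls in its own manifest, which is not confirmed here:

```toml
# Sketch of a Cargo.toml dependency entry, not the crate's actual manifest.
# Requiring at least 0.21.6 guarantees RootCertStore::add_trust_anchors exists,
# even if a downstream lockfile previously resolved an older 0.21.x release.
[dependencies]
rustls = "0.21.6"
```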

@nazar-pc
Author

Tested the current git of jsonrpsee (1af525c): it used 120% of CPU and finished in 74 seconds. I suspect it might take a while to deserialize 2 GiB of JSON, but even then it should ideally have used more than one core to do so.

A release build of the client took 42 seconds. Still a lot.

@niklasad1
Member

Ok, that makes sense; deserializing the response is single-threaded and is not split across different tasks/cores.

It's really intended for async I/O, but perhaps we could do something better if the message is larger than some limit.
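For illustration only (this is not how jsonrpsee is implemented today): one possible shape of such a size-based escape hatch is to parse oversized payloads on tokio's blocking thread pool, with an arbitrary threshold chosen here as an example.

```rust
use serde::de::DeserializeOwned;

/// Arbitrary cut-off: payloads above this are parsed off the async workers.
const LARGE_PAYLOAD_THRESHOLD: usize = 256 * 1024;

async fn parse_response<T>(raw: String) -> Result<T, serde_json::Error>
where
    T: DeserializeOwned + Send + 'static,
{
    if raw.len() > LARGE_PAYLOAD_THRESHOLD {
        // Large message: deserialize on tokio's blocking pool so the async
        // worker threads stay free to make progress on other tasks.
        tokio::task::spawn_blocking(move || serde_json::from_str(&raw))
            .await
            .expect("blocking parse task panicked")
    } else {
        // Small message: parse inline, the cost is negligible.
        serde_json::from_str(&raw)
    }
}
```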

@nazar-pc
Author

Yeah, that was my thought as well. JSON messages that are 2 MiB in size are not very typical for JSON-RPC.

@nazar-pc
Author

On this topic, making requests from a tokio::task helped a lot with offloading compute, though the serde_json::Value is still created in async context; I'm wondering if that should be done by default (a sketch of the pattern follows below).

Similarly, I see that in batch_request the responses are deserialized sequentially, which is suboptimal from a performance point of view.

What I don't know is how much concurrency is applied on the server side. Either way, current versions of jsonrpsee are WAY BETTER than the 0.16.2 I used originally.
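A minimal sketch of the spawning pattern mentioned above, again assuming jsonrpsee ~0.20's ClientT API and reusing the same placeholder URL and method name as in the earlier sketch:

```rust
use std::sync::Arc;

use jsonrpsee::core::client::ClientT;
use jsonrpsee::rpc_params;
use jsonrpsee::ws_client::WsClientBuilder;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Arc::new(
        WsClientBuilder::default()
            .build("ws://127.0.0.1:9944")
            .await?,
    );

    // Spawn one task per request so the per-response deserialization work can
    // be scheduled across the multi-threaded runtime's worker threads instead
    // of being handled sequentially on a single task.
    let handles: Vec<_> = (0..1000)
        .map(|i| {
            let client = Arc::clone(&client);
            tokio::spawn(async move {
                client
                    .request::<String, _>("example_bigResponse", rpc_params![i])
                    .await
                    .map(|s| s.len())
            })
        })
        .collect();

    let mut bytes = 0usize;
    for handle in handles {
        bytes += handle.await??;
    }
    println!("received {bytes} bytes across 1000 responses");
    Ok(())
}
```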
