Unclear documentation about limit_concurrency and backlog settings #1817
Replies: 18 comments 16 replies
-
PRs are always welcome to improve the documentation...
-
@Kludex those are my questions, though; once I understand the answers I'm happy to improve the documentation.
-
I'm trying to figure out what the backlog parameter does as well. It looks like it ultimately goes to loop.create_server(). The original PR doesn't seem to clarify what's going on: https://github.com/encode/uvicorn/pull/545/files
In playing around on Ubuntu 20.04, I think that ultimately the setting doesn't have much observable effect. Given this, I don't have a good idea when anyone would ever use backlog.
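For what it's worth, my understanding of the plumbing (a sketch of the idea, not the exact uvicorn code): the value ends up as the listen() backlog of the single server socket, i.e. the size of the kernel's queue of connections that have completed the TCP handshake but have not yet been accept()ed by the server process.

```python
import asyncio

async def echo(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    writer.write(await reader.read(1024))
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main() -> None:
    # Sketch of where a --backlog style value goes: it is forwarded to the
    # event loop's server factory, which calls sock.listen(backlog) on the
    # single listening socket. The kernel then queues at most ~backlog
    # connections that have finished the TCP handshake but have not been
    # accept()ed yet; the application code never sees this queue.
    server = await asyncio.start_server(echo, "127.0.0.1", 8001, backlog=2048)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```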
-
To answer the rephrased question "when limit_concurrency is reached, do all extra requests get a 503 immediately?": No. You first have to wait for one of the workers to finish its current request; it will then drop all the extra requests. Per worker. That's because each worker runs on a single thread, so it can't go and tell an incoming request to drop while it's busy. You would hope that there would be some OS-layer tool that would do that dropping for you (like ...).
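For context, the check is purely an application-level counter inside the server process, nothing at the socket layer. A simplified sketch of the idea (illustrative only, not uvicorn's actual implementation; names are made up):

```python
from typing import Callable

# Simplified sketch of an app-level concurrency gate in the spirit of
# --limit-concurrency: each HTTP request is checked against a counter of
# in-flight requests and, if the limit is hit, a canned 503 is sent instead
# of calling the real app. Nothing here touches the kernel's accept queue,
# i.e. the backlog.
class ConcurrencyGate:
    def __init__(self, app: Callable, limit: int) -> None:
        self.app = app
        self.limit = limit
        self.in_flight = 0

    async def __call__(self, scope, receive, send) -> None:
        if scope["type"] != "http":
            await self.app(scope, receive, send)
            return
        if self.in_flight >= self.limit:
            # The connection was already accepted and the request parsed;
            # the rejection happens per request, inside the worker process.
            await send({"type": "http.response.start", "status": 503,
                        "headers": [(b"content-type", b"text/plain")]})
            await send({"type": "http.response.body", "body": b"Service Unavailable"})
            return
        self.in_flight += 1
        try:
            await self.app(scope, receive, send)
        finally:
            self.in_flight -= 1
```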
-
Been a while, but IIRC there's zero link between the two settings.
-
@euri10 I'm curious what part of that answer is additionally helpful here? That answer states "backlog is passed down the loop.create_server and ultimately will determine the number of sockets listening.", which I believe is incorrect. There's only one socket listening, with a backlog queue of pending connections. The claim that "limit-concurrency is just here to tell, after x responses issue a 503" is, I think, not very clear either. My description above seems to be more detailed and specific. Maybe you understand something I do not, but from what I can glean you're just adding that answer in case it's helpful, not because you have confidence that it's helpful. Which is a fine thing, I just wanted to check.
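To make the one-listening-socket point concrete, here's a plain-sockets illustration (nothing uvicorn-specific): a single socket listens, and the backlog argument only sizes the kernel's queue of connections that have completed the TCP handshake but haven't been accept()ed yet.

```python
import socket

# One listening socket; listen(backlog) sizes the kernel's queue of completed
# but not-yet-accepted connections. The kernel may clamp the value (on Linux
# to net.core.somaxconn), so it is a request, not a guarantee.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("127.0.0.1", 8001))
sock.listen(5)  # up to ~5 handshaken connections can wait for accept()

while True:
    conn, addr = sock.accept()  # pops one connection off the kernel queue
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
    conn.close()
```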
-
I didn't read your answer, I was answering the OP ;)
-
@euri10 that stackoverflow link simply states there is no link, gives a wrong definition of limit_concurrency, and offers no explanation of the dynamic between limit_concurrency and backlog. If you are confident that there is no link, then please explain exactly why there is no relationship by elaborating on the answers to my questions.
-
@Kludex why was this issue converted to a discussion? There is clear confusion about the behavior of Uvicorn's two settings. This is an issue for people troubleshooting timeouts if they can't find out exactly what the settings are doing.
-
After reading the source code further, I think there is actually a bug.
-
Combine that with @siminchengithub's answer, and it looks like ...
-
Going to hijack this discussion thread here because I'm still very confused about how these 2 flags work, both after the answers to my stackoverflow question that was linked here and after the responses in this thread. Concrete example: I can see at least 2 possible interpretations of how uvicorn behaves here from the documentation:
I suppose I can set up a test to get this result, but this really should be clear from the documentation.
-
It's also worth noting that when you ask ChatGPT, its interpretation of how uvicorn+gunicorn works is that it behaves like option 1 in my post above, but this thread is making me think that option 2 is actually correct. This is just further evidence that a natural-language interpretation of the existing documentation and online material is misleading. Here's some output text:
-
Did an experiment and basically found no use for backlog. Also, limit_concurrency has a bug in its implementation: when you set it to 10 it will only take 9 requests at a time.
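If that off-by-one is real, it would be consistent with a check of the form len(active) >= limit that already counts the connection carrying the new request. A hypothetical illustration (not the actual uvicorn code):

```python
# Hypothetical illustration of how a ">=" check can turn limit_concurrency=10
# into an effective limit of 9: the new connection is already counted when
# the check runs, so the 10th concurrent request trips the limit.
LIMIT_CONCURRENCY = 10

def should_reject(open_connections: int, limit: int) -> bool:
    # open_connections includes the connection making the new request.
    return open_connections >= limit

for n in range(8, 12):
    print(n, "open connections ->",
          "503" if should_reject(n, LIMIT_CONCURRENCY) else "200")
# 8 open connections -> 200
# 9 open connections -> 200
# 10 open connections -> 503   <- i.e. only 9 requests run concurrently
# 11 open connections -> 503
```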
-
Are there any conclusions?
-
tl;dr
Findings
MRE

server.py

```python
import fastapi, logging, asyncio, httpx

logging.basicConfig(level=logging.INFO, format="%(asctime)s:%(levelname)-7s %(filename)20s:%(lineno)-4d %(name)s:%(message)s")

app = fastapi.FastAPI()


@app.get("/")
async def read_root():
    await asyncio.sleep(2)
    return {"Hello": "World"}


async def bomb_single_connection(requests=32):
    """Send concurrent requests using HTTP connection pooling."""
    limits = httpx.Limits(max_keepalive_connections=0, max_connections=1)  # everything over 1 connection
    transport = httpx.AsyncHTTPTransport(retries=0)  # no retries on failed connections (to test --backlog)
    timeout = httpx.Timeout(None)  # no timeouts (also no timeout for waiting for a slot in the connection pool)
    async with httpx.AsyncClient(limits=limits, timeout=timeout, transport=transport) as client:
        # send all requests concurrently over the same connection
        await asyncio.gather(*[client.get('http://localhost:8001') for _ in range(requests)])


async def bomb_separate_connections(requests=32):
    """Send concurrent requests, each in their own connection."""
    await asyncio.gather(*[bomb_single_connection(requests=1) for _ in range(requests)])
```

terminal 1

```
$ uvicorn server:app --port 8001 --host 0.0.0.0 --backlog 0 --limit-concurrency 32
INFO: Started server process [37996]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8001 (Press CTRL+C to quit)
WARNING: Exceeded concurrency limit.
INFO: 127.0.0.1:55433 - "GET / HTTP/1.1" 503 Service Unavailable
INFO: 127.0.0.1:55379 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55381 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55383 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55384 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55387 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55389 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55390 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55392 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55394 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55396 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55398 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55400 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55403 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55404 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55406 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55408 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55411 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55412 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55414 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55416 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55418 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55420 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55422 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55425 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55426 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55428 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55430 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55434 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55436 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55437 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55438 - "GET / HTTP/1.1" 200 OK
WARNING: Exceeded concurrency limit.
INFO: 127.0.0.1:55523 - "GET / HTTP/1.1" 503 Service Unavailable
WARNING: Exceeded concurrency limit.
WARNING: Exceeded concurrency limit.
INFO: 127.0.0.1:55524 - "GET / HTTP/1.1" 503 Service Unavailable
INFO: 127.0.0.1:55525 - "GET / HTTP/1.1" 503 Service Unavailable
INFO: 127.0.0.1:55526 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55527 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55528 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55529 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55541 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55542 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55543 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55544 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55545 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55546 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55547 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55548 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55549 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55550 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55551 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55552 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55553 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55554 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55555 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55556 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55557 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55558 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55559 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55560 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55561 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55562 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55563 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55564 - "GET / HTTP/1.1" 200 OK
INFO: 127.0.0.1:55565 - "GET / HTTP/1.1" 200 OK
```

terminal 2

```
In [1]: %load_ext autotime  # https://pypi.org/project/ipython-autotime/
time: 359 µs (started: 2024-02-06 10:11:53 +01:00)
In [2]: from server import bomb_single_connection, bomb_separate_connections
time: 12.7 ms (started: 2024-02-06 10:12:02 +01:00)
In [3]: await bomb_single_connection(requests=32)
2024-02-06 10:13:21,781:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 503 Service Unavailable"
2024-02-06 10:13:23,723:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:13:23,724:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:13:23,725:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:13:23,726:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:13:23,727:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:13:23,728:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:13:23,732:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:13:23,733:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:13:23,734:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:13:23,739:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:13:23,740:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:13:23,741:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:13:23,745:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:13:23,748:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:13:23,750:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:13:23,751:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:13:23,755:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:13:23,758:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:13:23,760:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:13:23,764:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:13:23,770:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:13:23,771:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:13:23,776:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:13:23,777:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:13:23,779:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:13:23,780:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:13:23,781:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:13:23,784:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:13:23,785:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:13:23,785:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:13:23,786:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
time: 2.12 s (started: 2024-02-06 10:13:21 +01:00)
In [4]: await bomb_separate_connections(requests=32)
2024-02-06 10:16:26,863:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 503 Service Unavailable"
2024-02-06 10:16:26,865:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 503 Service Unavailable"
2024-02-06 10:16:26,865:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 503 Service Unavailable"
2024-02-06 10:16:28,864:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:16:28,865:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:16:28,866:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:16:28,867:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:16:28,868:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:16:28,868:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:16:28,869:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:16:28,869:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:16:28,872:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:16:28,872:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:16:28,873:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:16:28,873:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:16:28,874:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:16:28,874:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:16:28,875:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:16:28,875:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:16:28,876:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:16:28,876:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:16:28,879:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:16:28,880:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:16:28,880:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:16:28,881:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:16:28,881:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:16:28,882:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:16:28,882:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:16:28,883:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:16:28,883:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:16:28,884:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
2024-02-06 10:16:28,884:INFO _client.py:1729 httpx:HTTP Request: GET http://localhost:8001 "HTTP/1.1 200 OK"
time: 2.53 s (started: 2024-02-06 10:16:26 +01:00)
```

(empty lines added for readability)
-
After two years this is still open, right? Just tested it.
-
According to the doc, limit_concurrency is the maximum number of concurrent connections and any extra connection requests will get 503; backlog is the maximum number of connections in the backlog waiting to be handled.
My confusion about them:
If someone could also describe more clearly how Uvicorn interacts with a socket and its backlog in relation to the limit_concurrency setting, that would be great!
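Here is my current best guess at the mental model, which I'd love someone to confirm or correct (this is an assumption on my part, not something from the docs): backlog acts in the kernel before accept(), limit_concurrency acts inside the uvicorn process after accept(), and the two never consult each other. A toy sketch of that guess:

```python
# Toy model of where an incoming request might end up (my assumption, not
# official uvicorn documentation): backlog bounds connections the process has
# not accepted yet; limit_concurrency bounds requests it has accepted.
def classify_incoming(kernel_queue_len: int, backlog: int,
                      in_flight: int, limit_concurrency: int) -> str:
    if kernel_queue_len >= backlog:
        # Kernel accept queue is full: the OS drops or refuses the handshake;
        # uvicorn never even sees this connection.
        return "dropped/refused by the OS"
    if in_flight >= limit_concurrency:
        # Connection was accepted, but the in-process counter is at the limit:
        # uvicorn answers 503 itself; the ASGI app never runs.
        return "503 Service Unavailable from uvicorn"
    return "handled by the ASGI app"

# Example: plenty of room in the kernel queue, but the worker is saturated.
print(classify_incoming(kernel_queue_len=3, backlog=2048,
                        in_flight=40, limit_concurrency=40))
# -> 503 Service Unavailable from uvicorn
```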