Question about KeepAlive #1303
Comments
@alexellis Could you please help clarify this auto-scaling problem?
We found the reason. The OpenFaaS API gateway receives requests from the client and then creates new keep-alive connections to the function pods to distribute the requests. After auto-scaling, the gateway does NOT create new connections to the newly created pods, which leads to an unbalanced workload distribution and no performance improvement. We found a workaround that makes all requests get distributed equally: after auto-scaling, terminate all existing connections to the gateway so that the gateway also terminates its connections to the function pods, then quickly establish new connections to the gateway and send requests again before the newly created pods are scaled down for lack of traffic. Afterwards, the gateway establishes connections to all function pods (including the newly created ones) and distributes requests equally. We think this is a bug, and a possible solution is for the gateway to set up new connections to newly created pods after auto-scaling. @alexellis
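A minimal sketch of the workaround described above, assuming a gateway at 127.0.0.1:8080 and a function named fetch-page (both placeholders), and assuming wrk2 is built as the usual `wrk` binary. Stopping the first run tears down the client connections to the gateway; restarting quickly establishes fresh connections before the idle pods are scaled back down.

```sh
GATEWAY=http://127.0.0.1:8080   # placeholder gateway address
FN=fetch-page                   # placeholder function name

# First run at 15 req/s: this triggers the QPS alert and the scale-up,
# but the existing keep-alive connections stay pinned to the original pod.
wrk -t2 -c2 -d120s -R15 "$GATEWAY/function/$FN"

# Stopping wrk closes the client connections; per the workaround above, the
# gateway then also drops its connections to the function pods. Restart
# quickly so fresh connections are spread across all pods, including the
# newly created ones.
wrk -t2 -c10 -d120s -R15 "$GATEWAY/function/$FN"
```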
It's likely that you are only using a single connection / client for this test. When using […]
/add label: question
I've raised an issue to track this behaviour and to describe the work-arounds. Please have a look at the lab for Linkerd2, a lightweight proxy which, once installed, will take over load-balancing and counter the keep-alive settings you are seeing in your testing: https://github.com/openfaas-incubator/openfaas-linkerd2. Your input would also be welcomed on issue #1322.
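For reference, a rough sketch of what the Linkerd2 approach looks like; the exact steps are in the linked lab, so treat the commands below as assumptions rather than the lab's instructions. Linkerd's sidecar proxy load-balances individual requests, which counters the per-connection pinning caused by keep-alive.

```sh
# install the Linkerd2 control plane
linkerd install | kubectl apply -f -
linkerd check

# enable sidecar injection for the OpenFaaS namespaces
kubectl annotate namespace openfaas openfaas-fn linkerd.io/inject=enabled

# restart the deployments so the proxy is injected into the gateway and functions
kubectl rollout restart deploy -n openfaas
kubectl rollout restart deploy -n openfaas-fn
```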
/lock: inactivity
My actions before raising this issue
- issue #1271
- issue #391
What is the expected behavior of traffic distribution after QPS based auto-scaling with AlertManager?
We found that traffic is not distributed equally to every function pod after auto-scaling with AlertManager (QPS).
Expected Behaviour
After auto-scaling, we expect the traffic of requests to be distributed to every function pod equally.
Current Behaviour
We deploy a function (which fetches a web page from a local server) and set the auto-scaling rules (min: 1, max: 10, factor: 10). Then we use wrk2 to keep sending requests to that function at a rate of 15 requests/second. After auto-scaling there are five newly created pods, but all of the traffic goes to the first function pod and the other function pods don't receive any requests. We expected the request traffic to go to every pod of this function equally. Is there a bug in the traffic distribution?
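A small sketch, assuming a Kubernetes cluster with metrics-server installed, for observing the behaviour described above: watch the replica count grow when the alert fires, then compare per-pod resource usage as a rough proxy for how the traffic is being spread.

```sh
# watch the function's replica count change as AlertManager fires
kubectl get deploy -n openfaas-fn -w

# list the function pods and where they are scheduled
kubectl get pods -n openfaas-fn -o wide

# compare per-pod CPU/memory: a pod that never receives traffic shows
# near-zero usage compared to the pod pinned by the keep-alive connection
kubectl top pods -n openfaas-fn
```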
Steps to Reproduce (for bugs)
Set the parameters of the function as follows (a deploy sketch follows this list):
com.openfaas.scale.min: "1"
com.openfaas.scale.max: "10"
com.openfaas.scale.factor: "10"
com.openfaas.scale.zero: false
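For reference, a hedged sketch of deploying a function with these labels via faas-cli; the image and function name are placeholders, not the ones used in the original test.

```sh
faas-cli deploy \
  --image example/fetch-page:latest \
  --name fetch-page \
  --label com.openfaas.scale.min=1 \
  --label com.openfaas.scale.max=10 \
  --label com.openfaas.scale.factor=10 \
  --label com.openfaas.scale.zero=false
```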
Deploy the function and do these two things at the same time (a hedged sketch of both commands follows this list):
(1) use wrk2 to send requests to the function at 15 requests/second;
(2) use tcpdump to capture the traffic on the worker node where the gateway is running.
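The exact commands are not preserved in this copy; the following is an illustrative sketch under the same assumptions as above (placeholder gateway address and function name, wrk2 built as the `wrk` binary).

```sh
# (1) drive the function at 15 requests/second for the duration of the test
wrk -t2 -c4 -d300s -R15 http://127.0.0.1:8080/function/fetch-page

# (2) on the worker node running the gateway, capture traffic on port 8080
# (the gateway and the function watchdogs both listen on 8080 by default)
sudo tcpdump -i any -nn port 8080 -w capture.pcap
```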
Context
We are trying to quantify the auto-scaling performance of OpenFaaS.
Your Environment
FaaS-CLI version (faas-cli version): 0.9.2
Docker version (docker version): 18.09.2
Are you using Docker Swarm or Kubernetes (FaaS-netes)?
Kubernetes (FaaS-netes)
Operating System and version (e.g. Linux, Windows, MacOS):
Linux
Code example or link to GitHub repo or gist to reproduce problem:
Just a simple function that requests a web page from a local web server.
Other diagnostic information / logs from troubleshooting guide
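No logs were attached; below is a hedged sketch of the kind of diagnostics usually collected for reports like this (container names may differ depending on the faas-netes chart version).

```sh
# gateway and provider logs
kubectl logs -n openfaas deploy/gateway -c gateway
kubectl logs -n openfaas deploy/gateway -c faas-netes

# recent events in the function namespace (scale-ups, pod starts/stops)
kubectl get events -n openfaas-fn --sort-by=.metadata.creationTimestamp
```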
Next steps
You may join Slack for community support.