Question about KeepAlive #1303

Closed
3 tasks done
hotjunfeng opened this issue Sep 4, 2019 · 6 comments

Comments

hotjunfeng commented Sep 4, 2019

My actions before raising this issue

What is the expected behavior of traffic distribution after QPS-based auto-scaling with AlertManager?

We found that traffic is not distributed equally to every function pod after auto-scaling with AlertManager (QPS).

Expected Behaviour

After auto-scaling, we expect request traffic to be distributed equally across all function pods.

Current Behaviour

We deploy a function (which fetches a web page from a local server) and set the auto-scaling rules (min: 1, max: 10, factor: 10). Then we use wrk2 to keep sending requests to that function at a rate of 15 requests/second. After auto-scaling there are five newly created pods, but all of the traffic goes to the first function pod and the other function pods don't receive any requests. We thought the request traffic should go to every pod of this function equally. Is there a bug in the traffic distribution?

Steps to Reproduce (for bugs)

  1. Set the function's scaling labels as follows (see the stack.yml sketch after this list):
    com.openfaas.scale.min: "1"
    com.openfaas.scale.max: "10"
    com.openfaas.scale.factor: "10"
    com.openfaas.scale.zero: "false"

  2. Deploy the function and do these two things at the same time:

(1) Use wrk2 to send requests to the function; the command we use is:

$ wrk2 -d 200 -c 15 -t 15 -R 15 --latency --timeout 30 [function_url]

(2) Use tcpdump to capture the traffic on the worker node where the gateway is running.

  3. Analyze the traffic distribution across the different function pods.
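
For reference, a minimal stack.yml sketch for step 1 might look like the following; the function name, handler, image, and gateway URL here are placeholders, not taken from the original report:

    provider:
      name: openfaas
      gateway: http://127.0.0.1:8080       # placeholder gateway URL

    functions:
      fetch-page:                           # hypothetical function name
        lang: go
        handler: ./fetch-page
        image: example/fetch-page:latest    # placeholder image
        labels:
          com.openfaas.scale.min: "1"
          com.openfaas.scale.max: "10"
          com.openfaas.scale.factor: "10"
          com.openfaas.scale.zero: "false"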

Context

We are trying to quantify the auto-scaling performance of OpenFaaS.

Your Environment

  • FaaS-CLI version ( Full output from: faas-cli version ):
    0.9.2

  • Docker version docker version (e.g. Docker 17.0.05 ):
    18.09.2

  • Are you using Docker Swarm or Kubernetes (FaaS-netes)?
    Kubernetes (FaaS-netes)

  • Operating System and version (e.g. Linux, Windows, MacOS):
    Linux

  • Code example or link to GitHub repo or gist to reproduce problem:
    Just a simple function to request a website page from a local web server.

  • Other diagnostic information / logs from troubleshooting guide

Next steps

You may join Slack for community support.

@hotjunfeng (Author)

@alexellis Could you please help clarify this auto-scaling problem?

@hotjunfeng (Author)

We found the reason. The OpenFaaS API gateway receives requests from the client and then creates new (keep-alive) connections to the function pods to distribute those requests. After auto-scaling, the gateway will NOT create new connections to the newly created pods, leading to an unbalanced workload distribution and no performance improvement.

We found a workaround that gets all requests distributed equally. After auto-scaling, we terminate all existing connections to the gateway and let the gateway terminate all of its connections to the function pods. Then, quickly, we establish new connections to the gateway and send requests again, before the newly created pods are terminated for receiving no traffic. Afterwards, the gateway establishes connections to all function pods (including the newly created ones) and distributes requests equally.

We think this is a bug, and a possible solution is for the gateway to set up new connections to the newly created pods after auto-scaling. @alexellis
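
As an illustration only, and not the gateway's actual code, the sketch below shows how Go's net/http Transport reuses idle keep-alive connections, which is the mechanism described above; setting DisableKeepAlives forces a new TCP connection per request, which is what would let the Kubernetes Service spread load over newly created pods. The client settings and the URL are assumptions made for the example.

    package main

    import (
        "net/http"
        "time"
    )

    // newClient returns an HTTP client. With keep-alives enabled (the default),
    // requests to the same host reuse an idle connection, so traffic stays pinned
    // to whichever backend that connection was first opened against. Setting
    // disableKeepAlives forces a new TCP connection per request, letting the
    // Kubernetes Service pick a (possibly newly created) pod each time.
    func newClient(disableKeepAlives bool) *http.Client {
        return &http.Client{
            Timeout: 30 * time.Second,
            Transport: &http.Transport{
                DisableKeepAlives:   disableKeepAlives,
                MaxIdleConnsPerHost: 1024,
                IdleConnTimeout:     120 * time.Second,
            },
        }
    }

    func main() {
        // Placeholder URL; with disableKeepAlives=false the second request
        // reuses the connection opened by the first.
        client := newClient(false)
        for i := 0; i < 2; i++ {
            if resp, err := client.Get("http://127.0.0.1:8080/function/fetch-page"); err == nil {
                resp.Body.Close()
            }
        }
    }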

hotjunfeng changed the title from "Question about traffic distribution across function pods after auto-scaling with AlertManager (QPS)" to "[BUG]unequal traffic distribution across function pods after QPS-based auto-scaling" on Sep 20, 2019
hotjunfeng changed the title from "[BUG]unequal traffic distribution across function pods after QPS-based auto-scaling" to "[BUG] Unequal traffic distribution across function pods after QPS-based auto-scaling" on Sep 20, 2019
alexellis changed the title from "[BUG] Unequal traffic distribution across function pods after QPS-based auto-scaling" to "Question about KeepAlive" on Sep 27, 2019
@alexellis (Member)

It's likely that you are only using a single connection / client for this test. When using hey with multiple connections, we do observe distribution across replicas.

  • If you disable KeepAlive in your client, i.e. hey, performance will be worse, but there is less chance that a connection will be reused for a client (see the example after this list)
  • If you add more than one client, this usually load-balances even with keep-alive
  • By adding Linkerd as a service mesh, the KeepAlive settings are ignored and requests are distributed more fairly
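
For example, a sketch using hey (github.com/rakyll/hey); the gateway address and function name are placeholders:

    # Keep-alive enabled (the default): workers reuse their connections,
    # so requests are more likely to keep hitting the same replica.
    $ hey -z 60s -c 15 http://127.0.0.1:8080/function/fetch-page

    # Keep-alive disabled: every request opens a new connection. Slower,
    # but requests are less likely to stick to a single replica.
    $ hey -z 60s -c 15 -disable-keepalive http://127.0.0.1:8080/function/fetch-page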

@alexellis (Member)

/add label: question

@alexellis (Member)

I've raised an issue to track this behaviour and to describe the work-arounds.

Please have a look at the lab for Linkerd2, a lightweight proxy which, once installed, takes over load-balancing and counters the keep-alive behaviour you are seeing in your testing.

https://github.com/openfaas-incubator/openfaas-linkerd2
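
The lab above has the exact steps; roughly, installing Linkerd and injecting its proxy looks like the sketch below, assuming the default openfaas and openfaas-fn namespaces:

    $ linkerd install | kubectl apply -f -
    $ kubectl -n openfaas get deploy -o yaml | linkerd inject - | kubectl apply -f -
    $ kubectl -n openfaas-fn get deploy -o yaml | linkerd inject - | kubectl apply -f -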

Your input would also be welcomed on issue 1322.

@alexellis (Member)

/lock: inactivity

derek bot locked and limited the conversation to collaborators on Feb 24, 2021