Idle gRPC connections in controller #117
Consider x CSIAddonsNode objects for the same CSI driver. For each operation:
I think the above is enough reason to keep a single pool of open connections which can be shared by all operations.
Keep the connection in memory and remove it after a certain interval of time; there is no need to keep it open at all times.
Any other method will share these disadvantages too.
What are the other methods that would share these disadvantages?
This issue was opened to examine the problems with the current approach and how it can be improved. I don't understand the reasoning here: why would you want to consume resources when they are not needed?
What do you mean by covering all the requirements/perf metrics? We don't need to scale-test for larger numbers, but we can check what could cause issues and how to fix them in the long run.
Can you please elaborate on this one?
I think it is fine to close idle connections. The CSIAddonsNode state that is kept in the controller (currently with the connection alive) can be retained, with an additional state for idle/disconnected connections. This might improve stability in case of network issues too, where the connection between the controller and CSIAddonsNode is interrupted for some unknown time.
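The close-idle-and-reconnect-on-demand approach described above could look roughly like this. A minimal sketch with assumed names (`nodeConn`, `getConn`, `closeIdle`, a `fakeConn` standing in for the real gRPC connection), not the actual controller implementation:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

type fakeConn struct{ target string }

// dial stands in for grpc.Dial; it only validates the target here.
func dial(target string) (*fakeConn, error) {
	if target == "" {
		return nil, errors.New("empty target")
	}
	return &fakeConn{target: target}, nil
}

// nodeConn mimics the per-CSIAddonsNode state the controller keeps:
// the endpoint and fetched capabilities survive even when the
// connection itself has been closed as idle.
type nodeConn struct {
	mu           sync.Mutex
	endpoint     string
	capabilities []string  // fetched once at registration, kept forever
	conn         *fakeConn // nil while idle/disconnected
}

// getConn re-establishes the connection lazily, so an operation after
// an idle close (or a network interruption) transparently reconnects.
func (n *nodeConn) getConn() (*fakeConn, error) {
	n.mu.Lock()
	defer n.mu.Unlock()
	if n.conn != nil {
		return n.conn, nil
	}
	c, err := dial(n.endpoint)
	if err != nil {
		return nil, err
	}
	n.conn = c
	return c, nil
}

// closeIdle drops the transport but keeps endpoint + capabilities.
func (n *nodeConn) closeIdle() {
	n.mu.Lock()
	defer n.mu.Unlock()
	n.conn = nil // real code would call conn.Close()
}

func main() {
	n := &nodeConn{endpoint: "node-1:9070", capabilities: []string{"ReclaimSpace"}}
	n.getConn()
	n.closeIdle()       // idle timeout fires
	c, _ := n.getConn() // next operation reconnects
	fmt.Println(c.target, n.capabilities[0])
}
```

Because capabilities are stored at registration time, closing an idle connection does not require re-fetching them on reconnect.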
This is needed to fetch the capabilities and store them.
I think this can be done. @Madhu-1 wdyt?
This looks okay 👍
As we already know, currently when a CSIAddonsNode object is created we create the connection and keep it open until the object is deleted. There could be advantages and disadvantages to this, as csi-addons is meant to be a generic component used by multiple CSI drivers. For example, in a 10-node cluster where 2 CSI drivers use csi-addons, we keep 20 + 2 connections open in memory (both provisioner and node-plugin sidecars are deployed). Thinking about scale, what about 100-node clusters, or even more CSI drivers in a cluster?
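The scale concern above can be put into a back-of-the-envelope count. This assumes one node-plugin sidecar per node per driver plus one provisioner sidecar per driver; `openConns` is a hypothetical helper, not project code:

```go
package main

import "fmt"

// openConns estimates the number of long-lived gRPC connections the
// controller holds: one per node-plugin sidecar (nodes x drivers)
// plus one per provisioner sidecar (one per driver).
func openConns(nodes, drivers int) int {
	return nodes*drivers + drivers
}

func main() {
	fmt.Println(openConns(10, 2))  // the 10-node, 2-driver example: 22
	fmt.Println(openConns(100, 3)) // a larger cluster: 303
}
```

The count grows multiplicatively with nodes and drivers, which is why keeping every connection open indefinitely becomes a concern at scale.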
Advantages
Disadvantages
I would like to hear thoughts from everyone on this one. cc @nixpanic @humblec @Rakshith-R @pkalever