
allow restricting Kube metadata to local node only #1440

Open · wants to merge 8 commits into main
Conversation

@mariomac (Contributor) commented Dec 10, 2024

Adds the BEYLA_KUBE_META_RESTRICT_LOCAL_NODE configuration option, which configures the local informer to watch only the Kubernetes Pods running on the local node. This alleviates the memory load, especially during startup.
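For context, restricting a watch to the local node is typically done with a Kubernetes field selector on spec.nodeName, which client-go passes as ListOptions.FieldSelector when the informer lists and watches Pods. A minimal sketch of the idea (the helper name and the node value are illustrative, not the PR's actual code):

```go
package main

import "fmt"

// localNodeSelector builds the field selector string that limits a Pod
// list/watch to a single node, so the API server only streams Pods
// scheduled on that node. (Helper name is illustrative, not the PR's code.)
func localNodeSelector(nodeName string) string {
	return "spec.nodeName=" + nodeName
}

func main() {
	// With client-go, this string would go into ListOptions.FieldSelector
	// when creating the shared informer factory.
	fmt.Println(localNodeSelector("worker-1")) // prints "spec.nodeName=worker-1"
}
```

Watching fewer objects means fewer entries in the informer cache, which is where the startup memory savings come from.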


codecov bot commented Dec 10, 2024

Codecov Report

Attention: Patch coverage is 1.31579% with 75 lines in your changes missing coverage. Please review.

Project coverage is 63.22%. Comparing base (0a0bb6f) to head (59cdfef).
Report is 1 commit behind head on main.

Files with missing lines Patch % Lines
pkg/kubecache/meta/informers_init.go 0.00% 64 Missing ⚠️
pkg/internal/kube/informer_provider.go 0.00% 10 Missing and 1 partial ⚠️

❗ The number of uploaded reports differs between BASE (0a0bb6f) and HEAD (59cdfef).

HEAD has 2 uploads less than BASE
Flag BASE (0a0bb6f) HEAD (59cdfef)
unittests 1 0
k8s-integration-test 1 0
Additional details and impacted files
@@             Coverage Diff             @@
##             main    #1440       +/-   ##
===========================================
- Coverage   80.97%   63.22%   -17.76%     
===========================================
  Files         149      145        -4     
  Lines       15255    15140      -115     
===========================================
- Hits        12353     9572     -2781     
- Misses       2293     4895     +2602     
- Partials      609      673       +64     
Flag Coverage Δ
integration-test 59.70% <1.31%> (+0.19%) ⬆️
k8s-integration-test ?
oats-test 33.79% <1.31%> (-0.12%) ⬇️
unittests ?

Flags with carried forward coverage won't be shown.


Contributor commented:
Hello @mariomac!
Backport pull requests need to be one of the following:

  • Pull requests that address bugs,
  • Urgent fixes that need product approval in order to get merged, or
  • Docs changes.

If the current pull request addresses a bug fix, please label it with the type/bug label.
If it already has product approval, please add the product-approved label. For docs changes, please add the type/docs label.
If the pull request modifies CI behaviour, please add the type/ci label.
If none of the above applies, please consider removing the backport label and targeting the next major/minor release.
Thanks!

Contributor commented:

This PR must be merged before a backport PR will be created.

@grcevski (Contributor) left a comment


LGTM! I think the test failure might be related to aggressive expiration of metrics, or maybe confusion about what the name should be. I see an expiration of these values, where we have testserver-...

2024-12-12T16:40:35.912684579Z stdout F time=2024-12-12T16:40:35.912Z level=DEBUG msg="storing new metric label set" component=otel.Expirer type=*metric.int64Inst labelValues="[172.18.0.2 43558 request 10.244.0.5 10.244.0.0/16 testserver-858cdf668b-xpnlx 8080  egress my-kube testserver-858cdf668b-xpnlx default 172.18.0.2 test-kind-cluster-netolly-control-plane testserver Deployment Pod internal-pinger-net default 172.18.0.2 test-kind-cluster-netolly-control-plane internal-pinger-net Pod Pod 8080 10.244.0.9 10.244.0.0/16 internal-pinger-net TCP]"

However, the next label is recorded with testserver as the name, i.e. without the dashed suffix:

2024-12-12T16:41:02.912577238Z stdout F time=2024-12-12T16:41:02.912Z level=DEBUG msg="storing new metric label set" component=otel.Expirer type=*metric.int64Inst labelValues="[172.18.0.2 43558 request 10.96.94.38 10.96.0.0/16 testserver 8080  egress my-kube testserver default   testserver Service Service internal-pinger-net default 172.18.0.2 test-kind-cluster-netolly-control-plane internal-pinger-net Pod Pod 8080 10.244.0.9 10.244.0.0/16 internal-pinger-net TCP]"

All subsequent labels are without the dashed suffix...

@@ -470,6 +521,10 @@ func (inf *Informers) ipInfoEventHandler(ctx context.Context) *cache.ResourceEve
return &cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
metrics.InformerNew()
em := obj.(*indexableEntity).EncodedMeta
for _, ip := range em.Ips {

I think we probably want to remove this debug print :)

@mariomac (Contributor, Author) commented Dec 13, 2024

Good catch @grcevski! The duplicated messages might be because testserver is captured both via a Pod (name with suffix) and via a Service (name without suffix), but I'll check anyway that the IPs do not collide and that the expiration is properly set.
