allow restricting Kube metadata to local node only #1440
base: main
Conversation
Codecov Report
Attention: Patch coverage is

Additional details and impacted files

@@             Coverage Diff             @@
##             main    #1440       +/-   ##
===========================================
- Coverage   80.97%   63.22%   -17.76%
===========================================
  Files         149      145        -4
  Lines       15255    15140      -115
===========================================
- Hits        12353     9572     -2781
- Misses       2293     4895     +2602
- Partials      609      673       +64

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Hello @mariomac!
Please, if the current pull request addresses a bug fix, label it with the
This PR must be merged before a backport PR will be created.
LGTM! I think the test failure might be related to aggressive expiration of metrics, or maybe confusion about what the name should be. I see an expiration of these values, where we have testserver-....
2024-12-12T16:40:35.912684579Z stdout F time=2024-12-12T16:40:35.912Z level=DEBUG msg="storing new metric label set" component=otel.Expirer type=*metric.int64Inst labelValues="[172.18.0.2 43558 request 10.244.0.5 10.244.0.0/16 testserver-858cdf668b-xpnlx 8080 egress my-kube testserver-858cdf668b-xpnlx default 172.18.0.2 test-kind-cluster-netolly-control-plane testserver Deployment Pod internal-pinger-net default 172.18.0.2 test-kind-cluster-netolly-control-plane internal-pinger-net Pod Pod 8080 10.244.0.9 10.244.0.0/16 internal-pinger-net TCP]"
However, the next label set is recorded with testserver as the name, i.e. without the dash suffix:
2024-12-12T16:41:02.912577238Z stdout F time=2024-12-12T16:41:02.912Z level=DEBUG msg="storing new metric label set" component=otel.Expirer type=*metric.int64Inst labelValues="[172.18.0.2 43558 request 10.96.94.38 10.96.0.0/16 testserver 8080 egress my-kube testserver default testserver Service Service internal-pinger-net default 172.18.0.2 test-kind-cluster-netolly-control-plane internal-pinger-net Pod Pod 8080 10.244.0.9 10.244.0.0/16 internal-pinger-net TCP]"
All subsequent label sets are recorded without the dash suffix...
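For readers following the expiration discussion, below is a minimal sketch of how a TTL-based label-set expirer can behave: a label set that is not touched within the TTL gets dropped and, if it reappears, is stored (and logged) again as a new label set, which matches the "storing new metric label set" lines above. This is illustrative only, not the actual Beyla/OTel implementation; the expirer, Touch, and Expire names are made up.

```go
package expiry

import (
	"strings"
	"sync"
	"time"
)

// expirer tracks when each metric label set was last seen and drops
// label sets that have not been updated within the TTL.
// Hypothetical sketch; not taken from the Beyla source.
type expirer struct {
	mu       sync.Mutex
	ttl      time.Duration
	lastSeen map[string]time.Time // key: label values joined with "|"
}

func newExpirer(ttl time.Duration) *expirer {
	return &expirer{ttl: ttl, lastSeen: map[string]time.Time{}}
}

// Touch marks a label set as just used. If the set had already been
// expired, it will be treated as a brand new label set next time.
func (e *expirer) Touch(labelValues []string) {
	e.mu.Lock()
	defer e.mu.Unlock()
	e.lastSeen[strings.Join(labelValues, "|")] = time.Now()
}

// Expire deletes every label set that was not seen within the TTL.
func (e *expirer) Expire() {
	e.mu.Lock()
	defer e.mu.Unlock()
	for k, seen := range e.lastSeen {
		if time.Since(seen) > e.ttl {
			delete(e.lastSeen, k)
		}
	}
}
```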
pkg/kubecache/meta/informers_init.go
Outdated
@@ -470,6 +521,10 @@ func (inf *Informers) ipInfoEventHandler(ctx context.Context) *cache.ResourceEve
	return &cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			metrics.InformerNew()
			em := obj.(*indexableEntity).EncodedMeta
			for _, ip := range em.Ips {
I think we probably want to remove this debug print :)
Good catch @grcevski! The duplication of messages might be because testserver is captured both as a Pod (name with suffix) and as a Service (name without suffix), but I'll check anyway that the IPs do not collide and that the expiration is properly set.
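To make the Pod-versus-Service explanation concrete, here is a tiny self-contained illustration (not Beyla code; the objectMeta type and ipIndex map are hypothetical) using the IPs and names from the debug logs above: the Pod IP and the Service ClusterIP differ, so an IP-indexed metadata store holds two separate entries, and the metrics end up with two distinct name labels rather than colliding.

```go
package main

import "fmt"

// objectMeta is a hypothetical, minimal stand-in for the Kube metadata
// that an IP-indexed store would keep per object.
type objectMeta struct {
	Name string
	Kind string
}

func main() {
	// Values taken from the debug logs above: the same workload shows up
	// once via its Pod IP and once via its Service ClusterIP.
	ipIndex := map[string]objectMeta{
		"10.244.0.5":  {Name: "testserver-858cdf668b-xpnlx", Kind: "Pod"},
		"10.96.94.38": {Name: "testserver", Kind: "Service"},
	}
	for ip, m := range ipIndex {
		fmt.Printf("%s -> %s (%s)\n", ip, m.Name, m.Kind)
	}
}
```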
Adds the BEYLA_KUBE_META_RESTRICT_LOCAL_NODE configuration option, which allows configuring the local informer to watch only the Kubernetes Pods from the local node. This alleviates the memory load, especially during startup.
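The full implementation isn't shown in this excerpt, but as a rough sketch of the general technique (not necessarily how this PR implements it): with client-go, a shared informer factory can be limited to the local node by adding a field selector on spec.nodeName. The NODE_NAME environment variable and the resync period below are assumptions for illustration.

```go
package main

import (
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newLocalNodePodInformerFactory builds an informer factory whose Pod
// informer only lists/watches Pods scheduled on the local node.
func newLocalNodePodInformerFactory() (informers.SharedInformerFactory, error) {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return nil, err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return nil, err
	}
	// NODE_NAME is assumed to be injected via the downward API
	// (fieldRef: spec.nodeName) in the Pod spec.
	nodeName := os.Getenv("NODE_NAME")
	return informers.NewSharedInformerFactoryWithOptions(
		client,
		30*time.Minute, // resync period, illustrative only
		informers.WithTweakListOptions(func(opts *metav1.ListOptions) {
			// Restrict the list/watch to Pods running on this node.
			opts.FieldSelector = "spec.nodeName=" + nodeName
		}),
	), nil
}

func main() {
	factory, err := newLocalNodePodInformerFactory()
	if err != nil {
		panic(err)
	}
	_ = factory.Core().V1().Pods().Informer() // register the Pod informer
	stop := make(chan struct{})
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
}
```

Watching only the local node's Pods keeps the informer cache proportional to per-node workload rather than cluster size, which is where the memory savings at startup come from.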