Templatize the notebook container name #805
base: main
Conversation
Thanks for submitting your first pull request! You are awesome! 🤗
Force-pushed from 10f4b59 to ea2cc43
Thanks for the PR, @jabbera!
I think this should get treated the same as pod_name, for the same reason. It doesn't get cleared on stop, but if you change the config, it sets back to the right version again the next time it starts. So I think it's mostly ok?
I do think we should work on clearing out the state on pod stops just for cleanliness, but I'd rather have us do that instead of put more things in labels.
So I'd say:
- For this PR, treat notebook_container_name the same as pod_name
- In a separate PR, deal with cleaning out pod state on stop
How does that sound?
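For concreteness, a hypothetical sketch of what that could look like in jupyterhub_config.py (the container_name_template trait name is made up here; pod_name_template is the existing trait being mirrored):

```python
# Hypothetical sketch, not the actual diff in this PR.
c = get_config()  # provided by JupyterHub when it loads jupyterhub_config.py

# Existing behaviour: the user pod's name is rendered from a template.
c.KubeSpawner.pod_name_template = "jupyter-{username}--{servername}"

# Proposed equivalent: render the notebook container's name the same way, so
# log views that group by container name become per-user searchable.
# The trait name below is illustrative only.
c.KubeSpawner.container_name_template = "notebook-{username}--{servername}"
```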
This inspired me to open #806 as well :)
Hmmm, I understand the point, but I'm hesitant to add config and the associated complexity to adjust the container name. Adding it would, for example, start to break dashboards from https://github.com/jupyterhub/grafana-dashboards, which assume the container name is fixed. There are labels and annotations related to the username - can those be used to filter out the logs for a specific user pod in AKS's log query system? For GKE that is possible, as described for example in https://infrastructure.2i2c.org/howto/troubleshoot/logs/#look-at-a-specific-user-s-logs.
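As an illustration of that label-based route (a sketch; it assumes KubeSpawner's extra_labels values go through the same {username}/{servername} expansion as its name templates, and the label keys are made up):

```python
# Illustrative sketch of filtering by labels instead of by container name.
c = get_config()

# Assumption: extra_labels values are template-expanded like KubeSpawner's
# name templates; the label keys below are examples, not existing defaults.
c.KubeSpawner.extra_labels = {
    "example.com/jupyter-username": "{username}",
    "example.com/jupyter-servername": "{servername}",
}
```

Log systems that can filter on pod labels (as in the GKE link above) could then select a specific user's pod without changing the container name.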
Here is what I'm dealing with; there aren't many filters either. There are certainly more comprehensive tools I could add that might have more bells and whistles, but this seemed like a generally useful feature IMO. No worries if you don't want to merge it, but can you work it out with @yuvipanda before I make the state changes?
I think it's helpful to follow what other Kubernetes applications do where reasonable. Do you have any examples of how other applications such as Helm charts handle the naming of containers?
I don't think you can compare the two. In almost all other charts, the pods are associated with a Deployment/StatefulSet/ReplicaSet/DaemonSet, etc. (as are the core components deployed by the Helm chart). It's precisely because these are bare pods that use the scheduler directly, as opposed to any of the higher-level resource types, that I added this.
Yeah, and the UX for logging is similar on GCP too.
This is why the container logging UXes are like this as well. We're an outlier. I think with my suggested changes the amount of complexity this adds is pretty small, especially since the default does nothing different.
Just checking in to see if there is consensus from the maintainers before I make the requested update. Cheers!
Commenting on this again to see if there is interest!
I still think this is worth adding, given #805 (review).
I'm willing to merge this if you make the suggested changes, @jabbera :)
@yuvipanda I've got a deliverable for work I'm furiously working on. I'll try to get back to this early July!
Many dashboards in the AKS environment (Grafana and Container Insights) categorize and filter things by container name. It would be useful to be able to combine the username/servername with "notebook" to make things more searchable. I understand that user namespaces could also be used for this, but our cluster serves multiple applications and we can't provide the permissions required for that.
I could do this with spawner state persisted to the database, but I don't want this state to persist between stops and starts. It only matters for locating a pod when the hub is restarted and the template has changed. I could, in theory, clean up the db state in stop as an alternative?
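For reference, a minimal sketch of that state-cleanup alternative, assuming the standard JupyterHub Spawner state hooks (get_state / load_state / clear_state); the container_name attribute and class below are purely illustrative, not this PR's code:

```python
# Illustrative sketch only, not the PR's implementation.
from kubespawner import KubeSpawner


class StatefulContainerNameSpawner(KubeSpawner):
    # Rendered container name, e.g. "notebook-alice"; illustrative attribute.
    container_name = None

    def get_state(self):
        # Persist the rendered name so a restarted hub can still locate the
        # running container even if the template config changed meanwhile.
        state = super().get_state()
        if self.container_name:
            state["container_name"] = self.container_name
        return state

    def load_state(self, state):
        super().load_state(state)
        self.container_name = state.get("container_name")

    def clear_state(self):
        # JupyterHub calls this when the server stops, so the stale name does
        # not persist between stops and starts.
        super().clear_state()
        self.container_name = None
```

With clear_state wired up like this, the rendered name would only live in the database while the server is actually running, which is the only window in which it is needed.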