Commit

Merge pull request #129 from willingc/test-build
Improve doc build and docstrings
yuvipanda authored Feb 1, 2018
2 parents 5fb5040 + 96eb6ee commit 8db219f
Showing 10 changed files with 143 additions and 157 deletions.
Empty file added docs/source/_static/.gitkeep
3 changes: 1 addition & 2 deletions docs/source/conf.py
@@ -42,6 +42,7 @@
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.intersphinx',
'sphinx.ext.napoleon',
'autodoc_traits',
]
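With `sphinx.ext.napoleon` added alongside `autodoc_traits`, the NumPy-style docstrings introduced later in this PR (a `Parameters` heading underlined with dashes) are parsed and rendered properly. A minimal sketch of that docstring style, using a hypothetical helper purely for illustration:

```python
def make_label(name, value):
    """
    Build a single kubernetes label entry.

    Parameters
    ----------
    name:
        Label key. Must be a valid kubernetes label name.
    value:
        Label value associated with the key.
    """
    return {name: value}
```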

@@ -174,7 +175,5 @@
]




# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {'https://docs.python.org/': None}
2 changes: 1 addition & 1 deletion docs/source/index.rst
@@ -7,9 +7,9 @@ Kubespawner
:maxdepth: 2

overview.md
spawner
objects
reflector
spawner
traitlets
utils

17 changes: 9 additions & 8 deletions docs/source/overview.md
@@ -21,26 +21,27 @@ simultaneous users), Kubernetes is a wonderful way to do it. Features include:
monitoring and failover for the hub process itself.

* Spawn multiple hubs in the same kubernetes cluster, with support for
[namespaces](http://kubernetes.io/docs/admin/namespaces/). You can limit the
[namespaces](https://kubernetes.io/docs/tasks/administer-cluster/namespaces/). You can limit the
  amount of resources each namespace can use, effectively limiting the amount
  of resources a single JupyterHub (and its users) can use. This lets
  organizations run multiple JupyterHubs on just one kubernetes cluster while
  keeping maintenance simple & resource utilization high.

* Provide guarantees and limits on the amount of resources (CPU / RAM) that
single-user notebooks can use. Kubernetes has comprehensive [resource control](http://kubernetes.io/docs/user-guide/compute-resources/) that can
single-user notebooks can use. Kubernetes has comprehensive [resource control](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) that can
be used from the spawner.

* Mount various types of [persistent volumes](http://kubernetes.io/docs/user-guide/persistent-volumes/)
* Mount various types of
[persistent volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)
onto the single-user notebook's container.

* Control various security parameters (such as userid/groupid, SELinux, etc)
via flexible [Pod Security Policies](http://kubernetes.io/docs/user-guide/pod-security-policy/).
via flexible [Pod Security Policies](https://kubernetes.io/docs/concepts/policy/pod-security-policy/).

* Run easily in multiple clouds (or on your own machines). Helps avoid vendor
lock-in. You can even spread out your cluster across
[multiple clouds at the same time](http://kubernetes.io/docs/user-guide/federation/).
[multiple clouds at the same time](https://kubernetes.io/docs/concepts/cluster-administration/federation/).

In general, Kubernetes provides a ton of well-thought-out, useful features -
and you can use all of them along with this spawner.
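For instance, the resource guarantees, limits, and volume mounts mentioned in the list above map onto KubeSpawner configuration options. A hedged sketch of a `jupyterhub_config.py` fragment (trait names taken from KubeSpawner's configuration reference, values purely illustrative):

```python
# jupyterhub_config.py -- illustrative values only
# Resource guarantees (kubernetes requests) and limits for each user pod
c.KubeSpawner.cpu_guarantee = 0.5
c.KubeSpawner.cpu_limit = 2
c.KubeSpawner.mem_guarantee = '512M'
c.KubeSpawner.mem_limit = '2G'

# Mount a persistent volume claim into each notebook container
c.KubeSpawner.volumes = [{
    'name': 'notebook-storage',
    'persistentVolumeClaim': {'claimName': 'claim-{username}'},
}]
c.KubeSpawner.volume_mounts = [{
    'name': 'notebook-storage',
    'mountPath': '/home/jovyan',
}]
```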
@@ -51,13 +52,13 @@ and you can use all of them along with this spawner.

Everything should work from Kubernetes v1.2+.

The [Kube DNS addon](http://kubernetes.io/docs/user-guide/connecting-applications/#dns)
The [Kube DNS addon](https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#dns)
is not strictly required - the spawner uses
[environment variable](http://kubernetes.io/docs/user-guide/connecting-applications/#environment-variables)
[environment variable](https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#environment-variables)
based discovery instead. Your kubernetes cluster will need to be configured to
support the types of volumes you want to use.

If you are just getting started and want a kubernetes cluster to play with,
[Google Container Engine](https://cloud.google.com/container-engine/) is
[Google Container Engine](https://cloud.google.com/kubernetes-engine/) is
probably the nicest option. For AWS/Azure,
[kops](https://github.com/kubernetes/kops) is probably the way to go.
9 changes: 1 addition & 8 deletions docs/source/reflector.rst
@@ -7,11 +7,4 @@ Module: :mod:`kubespawner.reflector`

.. automodule:: kubespawner.reflector

.. currentmodule:: kubespawner.reflector


:class:`PodReflector`
---------------------

.. autoconfigurable:: PodReflector
:members:
.. autoclass:: kubespawner.reflector.NamespacedResourceReflector
3 changes: 0 additions & 3 deletions docs/source/spawner.rst
@@ -8,10 +8,7 @@ Module: :mod:`kubespawner.spawner`

.. automodule:: kubespawner.spawner

.. currentmodule:: kubespawner.spawner

:class:`KubeSpawner`
--------------------

.. autoconfigurable:: KubeSpawner
:members:
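The `autoconfigurable` directive lists every configurable trait of `KubeSpawner`. In a deployment those traits are set on `c.KubeSpawner` in `jupyterhub_config.py`; a minimal sketch (trait names assumed from the generated reference, values illustrative):

```python
# jupyterhub_config.py -- minimal sketch, values illustrative
c.JupyterHub.spawner_class = 'kubespawner.KubeSpawner'
c.KubeSpawner.namespace = 'jupyterhub'   # kubernetes namespace for user pods
c.KubeSpawner.start_timeout = 60 * 5     # seconds to wait for a pod to start
```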
8 changes: 0 additions & 8 deletions docs/source/traitlets.rst
@@ -6,11 +6,3 @@ Module: :mod:`kubespawner.traitlets`
------------------------------------

.. automodule:: kubespawner.traitlets

.. currentmodule:: kubespawner.traitlets

:class:`Callable`
-----------------

.. autoclass:: Callable
:members:
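The `Callable` trait lets a configuration value be a function rather than a plain value. A hedged sketch, assuming `modify_pod_hook` is one of the KubeSpawner options declared with this trait (check the spawner reference for the exact name and signature):

```python
# jupyterhub_config.py -- sketch only; hook name and signature assumed
def add_team_label(spawner, pod):
    """Attach an extra label to every spawned pod before it is submitted."""
    pod.metadata.labels['team'] = 'data-science'
    return pod

c.KubeSpawner.modify_pod_hook = add_team_label
```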
10 changes: 1 addition & 9 deletions docs/source/utils.rst
@@ -7,12 +7,4 @@ Module: :mod:`kubespawner.utils`

.. automodule:: kubespawner.utils

.. currentmodule:: kubespawner.utils

.. autofunction:: request_maker

.. autofunction:: request_maker_serviceaccount

.. autofunction:: request_maker_kubeconfig

.. autofunction:: k8s_url
.. autofunction:: kubespawner.utils.generate_hashed_slug
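A quick usage sketch of the remaining public helper; the exact truncation and hashing behaviour is assumed from the rendered docstring:

```python
from kubespawner.utils import generate_hashed_slug

# Short names are expected to pass through unchanged, while overly long
# names are truncated and given a hash suffix so they stay unique and
# within kubernetes' name-length limits (behaviour assumed, see docstring).
print(generate_hashed_slug('claim-gallahad'))
print(generate_hashed_slug('claim-' + 'x' * 100))
```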
66 changes: 34 additions & 32 deletions kubespawner/objects.py
@@ -53,77 +53,78 @@ def make_pod(
"""
Make a k8s pod specification for running a user notebook.
Parameters:
- name:
Parameters
----------
name:
Name of pod. Must be unique within the namespace the object is
going to be created in. Must be a valid DNS label.
- image_spec:
image_spec:
Image specification - usually an image name and tag in the form
of image_name:tag. Same thing you would use with docker command-line
arguments.
- image_pull_policy:
image_pull_policy:
Image pull policy - one of 'Always', 'IfNotPresent' or 'Never'. Decides
when kubernetes will check for a newer version of the image and pull it
when running a pod.
- image_pull_secret:
image_pull_secret:
Image pull secret - Default is None -- set to your secret name to pull
from private docker registry.
- port:
port:
Port the notebook server is going to be listening on
- cmd:
cmd:
The command used to execute the singleuser server.
- node_selector:
node_selector:
Dictionary of node selector labels used to choose the nodes on which to launch the pods
- run_as_uid:
run_as_uid:
The UID used to run single-user pods. If this is set to None, the
default is to run as the user specified in the Dockerfile.
- fs_gid
fs_gid:
The gid that will own any fresh volumes mounted into this pod, if using
volume types that support this (such as GCE). This should be a group
that the uid the process runs as is a member of, so that it can
read / write to the mounted volumes.
- run_privileged:
run_privileged:
Whether the container should be run in privileged mode.
- env:
env:
Dictionary of environment variables.
- volumes:
volumes:
List of dictionaries containing the volumes of various types this pod
will be using. See k8s documentation about volumes on how to specify
these
- volume_mounts:
volume_mounts:
List of dictionaries mapping paths in the container to the volumes
(specified in volumes) that should be mounted at them. See the k8s
documentation for more details
- working_dir:
working_dir:
String specifying the working directory for the notebook container
- labels:
labels:
Labels to add to the spawned pod.
- annotations:
annotations:
Annotations to add to the spawned pod.
- cpu_limit:
cpu_limit:
Float specifying the max number of CPU cores the user's pod is
allowed to use.
- cpu_guarentee:
cpu_guarentee:
Float specifying the number of CPU cores the user's pod is
guaranteed to have access to, by the scheduler.
- mem_limit:
mem_limit:
String specifying the max amount of RAM the user's pod is allowed
to use. String instead of float/int since common suffixes are allowed
- mem_guarantee:
mem_guarantee:
String specifying the amount of RAM the user's pod is guaranteed
to have access to. String instead of float/int since common suffixes
are allowed
- lifecycle_hooks:
lifecycle_hooks:
Dictionary of lifecycle hooks
- init_containers:
init_containers:
List of initialization containers belonging to the pod.
- service_account:
service_account:
Service account to mount on the pod. None disables mounting
- extra_container_config:
extra_container_config:
Extra configuration (e.g. envFrom) for the notebook container that is not covered by the parameters above.
- extra_pod_config:
extra_pod_config:
Extra configuration (e.g. tolerations) for the pod that is not covered by the parameters above.
- extra_containers:
extra_containers:
Extra containers besides the notebook container, used for housekeeping jobs (e.g. crontab).
"""

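A hedged sketch of how this helper might be called; only a subset of the parameters documented above is shown, and the omitted keyword arguments are assumed to have defaults:

```python
from kubespawner.objects import make_pod

# Sketch only: values are illustrative, and parameters not passed here
# are assumed to default sensibly (consult the full signature).
pod = make_pod(
    name='jupyter-gallahad',
    image_spec='jupyterhub/singleuser:latest',
    image_pull_policy='IfNotPresent',
    port=8888,
    cmd=['jupyterhub-singleuser'],
    env={'JUPYTERHUB_API_TOKEN': 'not-a-real-token'},
    labels={'app': 'jupyterhub', 'component': 'singleuser-server'},
)
# `pod` is a kubernetes client object that the spawner serialises and
# submits to the API server.
```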
@@ -254,15 +255,16 @@ def make_pvc(
"""
Make a k8s pvc specification for running a user notebook.
Parameters:
- name:
Parameters
----------
name:
Name of persistent volume claim. Must be unique within the namespace the object is
going to be created in. Must be a valid DNS label.
- storage_class
storage_class:
String of the name of the k8s Storage Class to use.
- access_modes:
access_modes:
A list specifying the access modes the pod should have for the pvc
- storage
storage:
The amount of storage needed for the pvc
"""
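A similarly hedged sketch for the PVC helper; the `labels` keyword is not listed in the (partially elided) docstring above and is included only as an assumption:

```python
from kubespawner.objects import make_pvc

# Sketch only: values are illustrative; parameters hidden by the elided
# part of the diff may also be required.
pvc = make_pvc(
    name='claim-gallahad',
    storage_class='standard',
    access_modes=['ReadWriteOnce'],
    storage='1Gi',
    labels={'app': 'jupyterhub'},  # assumed parameter, not shown above
)
```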
