
Commit

[docs] Editing typos of the sds-local-volume module and general refactoring of docs (#92)

Signed-off-by: mortelumina98 <[email protected]>
Signed-off-by: Artem Kladov <[email protected]>
Signed-off-by: Aleksandr Zimin <[email protected]>
Co-authored-by: Artem Kladov <[email protected]>
Co-authored-by: Artem Kladov <[email protected]>
Co-authored-by: Aleksandr Zimin <[email protected]>
4 people authored Dec 6, 2024
1 parent 9b2fcad commit 36e3e24
Showing 4 changed files with 572 additions and 518 deletions.
129 changes: 68 additions & 61 deletions docs/FAQ.md
@@ -8,7 +8,7 @@ description: "The sds-local-volume module: FAQ"
- LVM is simpler and offers high performance, close to that of native disk drives;
- LVMThin allows overprovisioning; however, it is slower than LVM.

{{< alert level="warning" >}}
Use overprovisioning in LVMThin with caution and monitor the amount of free space in the pool (the cluster monitoring system generates separate events when the free space in the pool drops to 20%, 10%, 5%, and 1%).

If the pool runs out of free space, the module's operation will degrade as a whole, and there is a real risk of data loss!
@@ -65,7 +65,9 @@ nodeSelector:

The module selects nodes whose labels include the set specified in the settings as targets for usage. Therefore, by changing the `nodeSelector` field, you can control which nodes the module will use.

{{< alert level="warning" >}}
The `nodeSelector` field can contain any number of labels, but every specified label must be present on the node you intend to use with the module. The `sds-local-volume-csi-node` pod is launched on a node only when all the specified labels are present on it.
{{< /alert >}}
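
For example, to bring a node under the module's control, add every label from the `nodeSelector` field to it (the node name and label below are placeholders):

```shell
# %node-name% and the label key/value are placeholders; use the labels from your module settings
kubectl label node %node-name% %label-from-selector-key%=%label-from-selector-value%
```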

After adding labels to the nodes, the `sds-local-volume-csi-node` pods should be started. You can check their presence using the command:
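
For example, the following command shows the pods together with the nodes they are running on:

```shell
kubectl -n d8-sds-local-volume get po -owide
```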

@@ -84,6 +86,7 @@ kubectl -n d8-sds-local-volume get po -owide
If the pod is missing, please ensure that all labels specified in the module settings in the `nodeSelector` field are present on the selected node. More details about this can be found [here](#service-pods-for-the-sds-local-volume-components-are-not-being-created-on-the-node-i-need-why-is-that).

## How do I take a node out of the module's control?

To take a node out of the module's control, you need to remove the labels specified in the `nodeSelector` field in the module settings for `sds-local-volume`.

You can check the presence of existing labels in the `nodeSelector` using the command:
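
One way to view the current settings (assuming the module is configured through a `ModuleConfig` resource named `sds-local-volume`; the exact command may differ in your setup) is:

```shell
# assumes the module settings live in a ModuleConfig named sds-local-volume
kubectl get mc sds-local-volume -oyaml
```
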
@@ -104,7 +107,10 @@ Remove the labels specified in `nodeSelector` from the desired nodes.
```shell
kubectl label node %node-name% %label-from-selector%-
```

{{< alert level="warning" >}}
To remove a label, append a hyphen to its key instead of specifying a value.
{{< /alert >}}
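
For example, for a node labeled with `my-custom-label-key: my-custom-label-value` (the label used elsewhere in this document), the removal would look like this:

```shell
# example label key taken from the module settings shown later in this FAQ
kubectl label node %node-name% my-custom-label-key-
```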

As a result, the `sds-local-volume-csi-node` pod should be deleted from the desired node. You can check its status using the command:
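
For example, you can list the module's pods and filter by the node name; after the labels are removed, no `sds-local-volume-csi-node` pod should remain on that node:

```shell
# no sds-local-volume-csi-node pod should be listed for the node
kubectl -n d8-sds-local-volume get po -owide | grep %node-name%
```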

@@ -122,86 +128,87 @@ kubectl get node %node-name% --show-labels

If the labels from `nodeSelector` are not present on the node, ensure that this node does not own any `LVMVolumeGroup` resources used by `LocalStorageClass` resources. More details about this check can be found [here](#how-to-check-if-there-are-dependent-resources-lvmvolumegroup-on-the-node).


{{< alert level="warning" >}}
The `LVMVolumeGroup` and `LocalStorageClass` resources that prevent the node from being taken out of the module's control are labeled `storage.deckhouse.io/sds-local-volume-candidate-for-eviction`.
The node itself is labeled `storage.deckhouse.io/sds-local-volume-need-manual-eviction`.
{{< /alert >}}
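
To quickly find the resources carrying these labels, you can filter by them, for example:

```shell
# resources blocking the removal of the node from the module's control
kubectl get lvg,lsc -l storage.deckhouse.io/sds-local-volume-candidate-for-eviction
# nodes awaiting manual eviction
kubectl get node -l storage.deckhouse.io/sds-local-volume-need-manual-eviction
```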

## How to check if there are dependent resources `LVMVolumeGroup` on the node?

To check for such resources, follow these steps:
1. Display the existing `LocalStorageClass` resources:

```shell
kubectl get lsc
```

2. Check each of them for the list of used `LVMVolumeGroup` resources.

> If you want to list all `LocalStorageClass` resources at once, run the command:
>
> ```shell
> kubectl get lsc -oyaml
> ```

```shell
kubectl get lsc %lsc-name% -oyaml
```

An approximate representation of `LocalStorageClass` could be:

```yaml
apiVersion: v1
items:
- apiVersion: storage.deckhouse.io/v1alpha1
kind: LocalStorageClass
metadata:
finalizers:
- localstorageclass.storage.deckhouse.io
name: test-sc
spec:
lvm:
lvmVolumeGroups:
- name: test-vg
type: Thick
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
status:
phase: Created
kind: List
```

> Please pay attention to the `spec.lvm.lvmVolumeGroups` field: it specifies the `LVMVolumeGroup` resources in use.
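
If you only need this field, you can extract it directly, for example:

```shell
kubectl get lsc %lsc-name% -o jsonpath='{.spec.lvm.lvmVolumeGroups}'
```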

3. Display the list of existing `LVMVolumeGroup` resources:

```shell
kubectl get lvg
```

An approximate representation of `LVMVolumeGroup` could be:

```text
NAME HEALTH NODE SIZE ALLOCATED SIZE VG AGE
lvg-on-worker-0 Operational node-worker-0 40956Mi 0 test-vg 15d
lvg-on-worker-1 Operational node-worker-1 61436Mi 0 test-vg 15d
lvg-on-worker-2 Operational node-worker-2 122876Mi 0 test-vg 15d
lvg-on-worker-3 Operational node-worker-3 307196Mi 0 test-vg 15d
lvg-on-worker-4 Operational node-worker-4 307196Mi 0 test-vg 15d
lvg-on-worker-5 Operational node-worker-5 204796Mi 0 test-vg 15d
```

4. Ensure that the node you intend to remove from the module's control does not have any `LVMVolumeGroup` resources used in `LocalStorageClass` resources.
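
For example, you can filter the list of `LVMVolumeGroup` resources by the node name:

```shell
kubectl get lvg | grep %node-name%
```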

To avoid unintentionally losing control over volumes already created with the module, you need to manually delete the dependent resources by performing the necessary operations on the volumes.

## I removed the labels from the node, but the `sds-local-volume-csi-node` pod is still there. Why did this happen?
Most likely, the node still has `LVMVolumeGroup` resources that are used by one of the `LocalStorageClass` resources.

To avoid unintentionally losing control over volumes already created with the module, you need to manually delete the dependent resources by performing the necessary operations on the volumes.

The process of checking for the presence of the aforementioned resources is described [here](#how-to-check-if-there-are-dependent-resources-lvmvolumegroup-on-the-node).


## Service pods for the `sds-local-volume` components are not being created on the node I need. Why is that?

Most likely, the issue is related to the labels on the node.
@@ -237,7 +244,7 @@ nodeSelector:
my-custom-label-key: my-custom-label-value
```

The output of this command should include all labels from the `data.nodeSelector` field of the module settings, as well as `kubernetes.io/os: linux`.

Check the labels on the node you need:
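
For example, the following command prints the node together with all of its labels:

```shell
kubectl get node %node-name% --show-labels
```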

@@ -377,4 +384,4 @@ To check the status of the created snapshot, execute the command:
```shell
kubectl get volumesnapshot
```

This command will display a list of all snapshots and their current status.
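
To check whether a particular snapshot is ready to use, you can also query its `status.readyToUse` field, for example:

```shell
# %snapshot-name% is a placeholder
kubectl get volumesnapshot %snapshot-name% -o jsonpath='{.status.readyToUse}'
```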