diff --git a/.prettierignore b/.prettierignore
index 070ddcc82..23f1068ec 100644
--- a/.prettierignore
+++ b/.prettierignore
@@ -1,5 +1,5 @@
 /.github/actions/
 /.github/workflows/lint.yml
 /.github/workflows/tests-*.yml
-.github/workflows/tests-*.yaml
+/.github/workflows/tests-*.yaml
 /charts/
diff --git a/README.md b/README.md
index ea6587b07..c215550f5 100644
--- a/README.md
+++ b/README.md
@@ -26,44 +26,47 @@
 # ParadeDB Helm Chart

-The [ParadeDB](https://github.com/paradedb/paradedb) Helm Chart is based on the official [CloudNativePG Helm Chart](https://cloudnative-pg.io/). CloudNativePG is a Kubernetes operator that manages the full lifecycle of a highly available PostgreSQL database cluster with a primary/standby architecture using Postgres streaming replication.
+The [ParadeDB](https://github.com/paradedb/paradedb) Helm Chart is based on the official [CloudNativePG Helm Chart](https://cloudnative-pg.io/). CloudNativePG is a Kubernetes operator that manages the full lifecycle of a highly available PostgreSQL database cluster with a primary/standby architecture using Postgres streaming (physical) replication.

 Kubernetes, and specifically the CloudNativePG operator, is the recommended approach for deploying ParadeDB in production, with high availability. ParadeDB also provides a [Docker image](https://hub.docker.com/r/paradedb/paradedb) and [prebuilt binaries](https://github.com/paradedb/paradedb/releases) for Debian, Ubuntu and Red Hat Enterprise Linux.

+The ParadeDB Helm Chart supports Postgres 13+ and ships with Postgres 16 by default.
+
 The chart is also available on [Artifact Hub](https://artifacthub.io/packages/helm/paradedb/paradedb).

-## Getting Started
+## Usage
+
+### ParadeDB Bring-Your-Own-Cloud (BYOC)
+
+The most reliable way to run ParadeDB in production is with ParadeDB BYOC, an end-to-end managed solution that runs in the customer’s cloud account. It deploys on managed Kubernetes services and uses the ParadeDB Helm Chart.
+
+ParadeDB BYOC includes built-in integration with managed PostgreSQL services, such as AWS RDS, via logical replication. It also provides monitoring, logging and alerting through Prometheus and Grafana. The ParadeDB team manages the underlying infrastructure and lifecycle of the cluster.
+
+You can read more about the optimal architecture for running ParadeDB in production [here](https://docs.paradedb.com/deploy/overview), and you can contact sales [here](mailto:sales@paradedb.com).
+
+### Self-Hosted

 First, install [Helm](https://helm.sh/docs/intro/install/). The following steps assume you have a Kubernetes cluster running v1.25+. If you are testing locally, we recommend using [Minikube](https://minikube.sigs.k8s.io/docs/start/).

-### Installing the Prometheus Stack
+#### (Optional) Monitoring

-The ParadeDB Helm chart supports monitoring via Prometheus and Grafana. To enable this, you need to have the Prometheus CRDs installed before installing the CloudNativePG operator. If you do not yet have the Prometheus CRDs installed on your Kubernetes cluster, you can install it with:
+The ParadeDB Helm chart supports monitoring via Prometheus and Grafana. To enable this, you need to have the Prometheus CRDs installed before installing the CloudNativePG operator. The Prometheus CRDs ship with the community `kube-prometheus-stack` chart, available [here](https://prometheus-community.github.io/helm-charts).

-```bash
-helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
-helm upgrade --atomic --install prometheus-community \
---create-namespace \
---namespace prometheus-community \
---values https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/main/docs/src/samples/monitoring/kube-stack-config.yaml \
-prometheus-community/kube-prometheus-stack
-```
+Monitoring must also be enabled on the CNPG cluster itself at deployment time, by setting `cluster.monitoring.enabled=true` in your `values.yaml` or via `--set`.
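+As a sketch, the Prometheus stack can still be installed with the commands this revision removes above (the chart repository URL and the sample `kube-stack-config.yaml` observability values file are taken from the previous revision of this README; adjust the release name and namespace to your environment):
+
+```bash
+helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
+helm upgrade --atomic --install prometheus-community \
+--create-namespace \
+--namespace prometheus-community \
+--values https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/main/docs/src/samples/monitoring/kube-stack-config.yaml \
+prometheus-community/kube-prometheus-stack
+```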
-### Installing the CloudNativePG Operator
+#### Installing the CloudNativePG Operator

-Skip this step if the CloudNativePG operator is already installed in your cluster. If you do not wish to monitor your cluster, omit the `--set` commands.
+Skip this step if the CloudNativePG operator is already installed in your cluster.

 ```bash
 helm repo add cnpg https://cloudnative-pg.github.io/charts
 helm upgrade --atomic --install cnpg \
 --create-namespace \
 --namespace cnpg-system \
---set monitoring.podMonitorEnabled=true \
---set monitoring.grafanaDashboard.create=true \
 cnpg/cloudnative-pg
 ```

-### Setting up a ParadeDB CNPG Cluster
+#### Setting up a ParadeDB CNPG Cluster

 Create a `values.yaml` and configure it to your requirements. Here is a basic example:
@@ -85,40 +88,28 @@
 helm upgrade --atomic --install paradedb \
 --namespace paradedb \
 --create-namespace \
 --values values.yaml \
---set cluster.monitoring.enabled=true \
 paradedb/paradedb
 ```

 If `--values values.yaml` is omitted, the default values will be used. For additional configuration options for the `values.yaml` file, including configuring backups and PgBouncer, please refer to the [ParadeDB Helm Chart documentation](https://artifacthub.io/packages/helm/paradedb/paradedb#values). For advanced cluster configuration options, please refer to the [CloudNativePG Cluster Chart documentation](charts/paradedb/README.md).

-### Connecting to a ParadeDB CNPG Cluster
+#### Connecting to a ParadeDB CNPG Cluster

-The command to connect to the primary instance of the cluster will be printed in your terminal. If you do not modify any settings, it will be:
+The command to connect to the primary instance of the cluster will be printed in your terminal. You can also connect to a specific pod via:

 ```bash
-kubectl --namespace paradedb exec --stdin --tty services/paradedb-rw -- bash
+kubectl exec --stdin --tty <pod-name> -n <namespace> -- bash
 ```

-This will launch a Bash shell inside the instance. You can connect to the ParadeDB database via `psql` with:
+The primary is `paradedb-1`, and the replicas are `paradedb-2` onwards, depending on the number of replicas you configured. This will launch a Bash shell inside the instance. You can connect to the ParadeDB database via `psql` with:

 ```bash
 psql -d paradedb
 ```

-### Connecting to the Grafana Dashboard
-
-To connect to the Grafana dashboard for your cluster, we suggested port forwarding the Kubernetes service running Grafana to localhost:
-
-```bash
-kubectl --namespace prometheus-community port-forward svc/prometheus-community-grafana 3000:80
-```
-
-You can then access the Grafana dasbhoard at [http://localhost:3000/](http://localhost:3000/) using the credentials `admin` as username and `prom-operator` as password. These default credentials are
-defined in the [`kube-stack-config.yaml`](https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/main/docs/src/samples/monitoring/kube-stack-config.yaml) file used as the `values.yaml` file in [Installing the Prometheus CRDs](#installing-the-prometheus-stack) and can be modified by providing your own `values.yaml` file.
-
 ## Development

-To test changes to the Chart on a local Minikube cluster, follow the instructions from [Getting Started](#getting-started), replacing the `helm upgrade` step by the path to the directory of the modified `Chart.yaml`.
+To test changes to the Chart on a local Minikube cluster, follow the instructions from [Self-Hosted](#self-hosted), replacing the chart reference in the `helm upgrade` step with the path to the directory containing the modified `Chart.yaml`.

 ```bash
 helm upgrade --atomic --install paradedb --namespace paradedb --create-namespace ./charts/paradedb