---
id: silta-examples
title: Silta examples
---
The default values are documented here:
- Drupal chart: https://github.com/wunderio/charts/blob/master/drupal/values.yaml
- Frontend chart: https://github.com/wunderio/charts/blob/master/frontend/values.yaml
- Simple chart: https://github.com/wunderio/charts/blob/master/simple/values.yaml
Below is a list of examples for common needs.
All examples are meant to be used in the silta.yml
file of your project. Most of the examples work with both the Drupal chart and the Frontend chart, unless a chart name is explicitly mentioned above the code snippet; double-check against the default values files of each chart (drupal and frontend).
Also note that increasing resources will result in increased costs, so use sensible values.
Drupal chart:
mariadb:
  master:
    persistence:
      size: 2G
Note that storage can only be increased, not decreased.
Note 2: If you change this for an existing deployment, you'll need to run special commands in the cluster to expand the storage or the deployment will fail (see "MariaDB or Elasticsearch running out of disk space" on the troubleshooting page).
While it's normally not advised, it's possible to adjust the MariaDB image version.
Drupal chart and Frontend chart:
mariadb:
  image:
    # Available image tags listed at https://hub.docker.com/r/bitnami/mariadb/tags. Use debian images.
    # tag: 10.10.6-debian-11-r25
    # tag: 10.11.5-debian-11-r24
    tag: 11.0.3-debian-11-r25
It's highly suggested to create a MySQL data backup before changing the image.
Note: Do not change the image to an earlier version, as it may break the data.
Drupal chart:
mounts:
  public-files:
    mountPath: /app/web/sites/my-other-location/files
There is a pre-built mount template for Drupal private file storage in Silta (check values.yaml); you just have to enable it.
Drupal chart:
mounts:
  private-files:
    enabled: true
Enabling this will mount shared storage at `/app/private` and set `$settings['file_private_path']` accordingly. See chart values for override parameters.
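The mount location can also be overridden; a sketch, assuming the same `mountPath` parameter used by the other mount examples on this page (verify the exact override keys in the chart's values.yaml):
mounts:
  private-files:
    enabled: true
    # Hypothetical override; check the chart's values.yaml for the supported keys
    mountPath: /app/custom-private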
Drupal chart:
php:
  cron:
    drupal:
      # Run every 5 minutes
      schedule: "*/5 * * * *"
While the Frontend chart was originally meant to host NodeJS frontend projects, it also allows running custom docker images and optionally exposing them via the nginx reverse proxy. These containers are called "services" in the Frontend chart.
In this example, we are setting up two custom services: "mynodeservice", which will use a custom-built image (see the CircleCI configuration below), and "mongo", which will use the prebuilt mongodb docker image.
Note: This ".Values.services.mongo" service is not the same as ".Values.mongodb", it's just an example.
Frontend chart:
services:
  mynodeservice:
    replicas: 1
    port: 3000
    env:
      VARIABLE: "VALUE"
    # Exposed at [hostname]/servicepath
    exposedRoute: "/servicepath"
  mongo:
    # Mongo image does not need to be built,
    # uses https://hub.docker.com/_/mongo
    image: mongo
    port: 27017
See `.Values.serviceDefaults` for service default values.
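Defaults shared by all services can be set there once; a sketch, assuming the keys under `serviceDefaults` mirror the per-service keys shown above (verify the exact set in the Frontend chart's values.yaml):
serviceDefaults:
  # Hypothetical defaults applied to every entry under .Values.services
  replicas: 2
  env:
    NODE_ENV: "production"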
Service images are built in `.circleci/config.yml`:
workflows:
  build_deploy:
    jobs:
      - silta/frontend-build-deploy: &build-deploy
          image_build_steps:
            - silta/build-docker-image:
                dockerfile: "silta/mynodeservice.Dockerfile"
                path: "."
                identifier: "mynodeservice"
It is very important to understand that Kubernetes containers are stateless: the moment a container gets restarted, its storage is reset to the contents of the docker image. To persist a particular filesystem path, you need to define persistent storage at `.Values.mounts` and attach it to the service. This only applies to containers defined at `.Values.services`, since other applications (`.Values.mongodb`, `.Values.mariadb`, etc.) have default configurations in the chart that persist data.
In this example, we are setting up a custom "mongo" service that will use the prebuilt mongodb docker image.
Note: This ".Values.services.mongo" service is not the same as ".Values.mongodb", it's just an example.
Frontend chart:
services:
  mongo:
    # Mongo image does not need to be built,
    # uses https://hub.docker.com/_/mongo
    image: mongo
    port: 27017
    mounts:
      - mongodb-data
    strategy:
      type: Recreate
mounts:
  mongodb-data:
    enabled: true
    storage: 5Gi
    mountPath: /data/db
    # GKE storage class
    storageClassName: standard
    accessModes: ReadWriteOnce
The `storageClassName` value `standard` is only available on GKE; AWS and other cloud providers have different storage classes, so the correct value depends on the cloud provider. There are several options and they differ by access (read/write) speed; `standard` is a safe choice.
`accessModes` depends on the storage class: `standard` on GKE provides `ReadWriteOnce`. See https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes for more information.
`.Values.services.mongo.strategy.type: Recreate` is required for "read write once" type storage mounts, because they only allow mounting the storage once, while the default strategy for services is `RollingUpdate`, which would fail the deployment. See https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy for more information.
Drupal chart:
php:
  cron:
    my-custom-cron-job:
      # Run a custom drush command at midnight
      schedule: "0 0 * * *"
      command: "drush my-custom-command"
Frontend chart:
services:
  myservice:
    cron:
      my-custom-cron-job:
        # Run a custom command at midnight
        schedule: "0 0 * * *"
        command: "my-custom-command"
Drupal chart:
php:
  env:
    MY_VARIABLE_NAME: "theValueOfMyVariable"
Frontend chart:
services:
  myservice:
    env:
      MY_VARIABLE_NAME: "theValueOfMyVariable"
Drupal chart and Frontend chart:
nginx:
  basicauth:
    credentials:
      username: hello
      password: My$ecretP4ssw0rd
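Basic auth can also be switched off entirely; a sketch, assuming the `enabled` flag exposed in the charts' default values files:
nginx:
  basicauth:
    # Assumed flag; double-check the key name in the chart's values.yaml
    enabled: false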
Drupal chart and Frontend chart:
elasticsearch:
  enabled: true
Create a custom Elasticsearch dockerfile at silta/elasticsearch.Dockerfile:
ARG ES_VERSION=7.17.0
FROM docker.elastic.co/elasticsearch/elasticsearch:${ES_VERSION}
ARG ES_VERSION
USER root
# Install Elasticsearch plugins
RUN elasticsearch-plugin install analysis-icu
USER elasticsearch
Build the custom Elasticsearch image in CircleCI:
When using `silta/drupal-build-deploy`:
- silta/drupal-build-deploy:
    pre-release:
      - silta/build-docker-image:
          dockerfile: silta/elasticsearch.Dockerfile
          tag: with-plugins
          identifier: elasticsearch
          expose_image: false
When using `silta/frontend-build-deploy`:
- silta/frontend-build-deploy:
    image_build_steps:
      - silta/build-docker-image:
          dockerfile: silta/elasticsearch.Dockerfile
          tag: with-plugins
          identifier: elasticsearch
          expose_image: false
Use the custom Elasticsearch image in your Silta helm charts file. The container URL can be found in the CircleCI container build information.
elasticsearch:
  enabled: true
  image: <CONTAINER-URL>
  imageTag: 'with-plugins'
  imagePullPolicy: Always
Drupal chart:
memcached:
  enabled: true
Adjust resources and arguments as needed:
memcached:
  resources:
    requests:
      cpu: 150m
      memory: 1200M
    limits:
      cpu: 250m
      memory: 1500M
  arguments:
    - /run.sh
    # MaxMemoryLimit, this should be less than resources.limits.memory, or memcached will crash.
    - -m 1200
    # MaxItemSize
    - -I 16M
Modify the settings.php file (the example is from D9):
use Drupal\Core\Installer\InstallerKernel;

/**
 * Set the memcache server hostname when a memcached server is available.
 */
if (getenv("SILTA_CLUSTER") && getenv('MEMCACHED_HOST')) {
  $settings['memcache']['servers'] = [getenv('MEMCACHED_HOST') . ':11211' => 'default'];
  // Set the default cache backend to use memcache if memcache host is set and
  // if one of the memcache libraries was found. Cache backends should not be
  // set to memcache during installation. The existence of the memcache drupal
  // module should also be checked but this is not possible until this issue
  // has been fixed: https://www.drupal.org/project/drupal/issues/2766509
  if (!InstallerKernel::installationAttempted() && (class_exists('Memcache', FALSE) || class_exists('Memcached', FALSE))) {
    $settings['cache']['default'] = 'cache.backend.memcache';
  }

  /**
   * Memcache configuration.
   */
  if (class_exists('Memcached', FALSE)) {
    $settings['memcache']['extension'] = 'Memcached';
    // Memcached PECL Extension Support.
    $settings['memcache']['options'] = [
      // Enable compression for PHP 7.
      \Memcached::OPT_COMPRESSION => TRUE,
      \Memcached::OPT_DISTRIBUTION => \Memcached::DISTRIBUTION_CONSISTENT,
      // Decrease latency.
      \Memcached::OPT_TCP_NODELAY => TRUE,
    ];
  }
}
For D7, use:
if (getenv('MEMCACHED_HOST')) {
  if (class_exists('Memcache', FALSE) || class_exists('Memcached', FALSE)) {
    $conf['memcache_servers'] = [getenv('MEMCACHED_HOST') . ':11211' => 'default'];
  }
}
Drupal chart:
varnish:
  enabled: true
If extra cookies are needed, they can be defined in the `vcl_extra_cookies` variable:
varnish:
  vcl_extra_cookies: |
    if (req.http.Cookie ~ "extra_cookie_name") {
      return (pass);
    }
When varnish is enabled in the Silta config, the Drupal configuration needs to be adjusted so purge can find the varnish server.
Using the varnish module: you should consider using the purge module instead. No adjustments needed.
Using the varnish_purge module:
- Add a varnish purger to the purge settings.
- Find the purger configuration name. You can see it by hovering over the configuration link (e.g. `1b619ba479`). This will be your `<PURGER_ID>`.
- Put this snippet into your `settings.php` file:
if (getenv('SILTA_CLUSTER') && getenv('VARNISH_ADMIN_HOST')) {
  $config['varnish_purger.settings.<PURGER_ID>']['hostname'] = trim(getenv('VARNISH_ADMIN_HOST'));
  $config['varnish_purger.settings.<PURGER_ID>']['port'] = '80';
}
Make sure to replace `<PURGER_ID>` with the actual id of the purger configuration!
Changing varnish default control-key value
This can be done by adding the `secret` variable:
varnish:
  secret: "my-secret-key"
Please remember: best practice is to encrypt secrets.
Changing varnish cache backend
The current default cache backend is set to file storage. The setting is exposed in the values file and can be changed. Here are a few examples:
varnish:
  resources:
    requests:
      memory: 768Mi
  # Memory allocated storage. Make sure to adjust the varnish memory request too (see above).
  storageBackend: 'malloc,512m'
  # Disk allocated storage (an alternative; set only one storageBackend value).
  # storageBackend: 'file,/var/lib/varnish/varnish_storage.bin,512M'
By default, the redis service does not set a max memory value. You can set one by passing extra flags:
redis:
  enabled: true
  master:
    persistence:
      size: 2Gi
    extraFlags:
      - "--maxmemory-policy allkeys-lru"
      - "--maxmemory 1700mb"
Drupal chart:
referenceData:
  updateAfterDeployment: false
For some sites with a lot of files, taking a reference data dump after each deployment can cause the builds to time out. Disabling `updateAfterDeployment` means new environments will be created with reference data from the previous nightly dump.
Note: There is no e-mail handling in the Frontend chart; you must implement the SMTP workflow in your application.
If you just want to test email, you can use mailhog:
Drupal chart:
mailhog:
  enabled: true
Mailhog access information will be printed in release notes.
For emails to actually be sent out of the cluster, you can use any external SMTP server. Here's an example for Sparkpost.
Drupal chart:
smtp:
  enabled: true
  address: smtp.sparkpostmail.com:587 # or smtp.eu.sparkpost.com:587
  tls: true
  # When using smtp.office365.com:587 instead of sparkpost, both tls and starttls need to be set to "YES".
  # tls: "YES"
  # starttls: "YES"
  username: "SMTP_Injection"
  # Encrypt this password. See: docs/encrypting_sensitive_configuration.md
  # Please note that when using smtp.office365.com:587, the password may not contain the special characters =, :, or #
  password: "MYAPIKEY"
Note: To get the sparkpost API key, you have to validate your domain first.
Note 2: Because of long-standing bugs in the ssmtp package, the smtp password cannot contain the special characters `#`, `=` or `:`.
If `smtp` is configured and enabled but does not appear to send anything, make sure `mailhog` is not enabled.
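For example, when moving from mailhog testing to real delivery, disable mailhog explicitly:
mailhog:
  enabled: false
smtp:
  enabled: true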
All environments are given a hostname by default. It is possible to attach a custom domain name to an environment with the `exposeDomains` configuration parameter. All hostnames attached to an environment are printed in the release notes.
You can also use the `letsencrypt-staging` issuer to avoid hitting `letsencrypt` rate limits.
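For example, a sketch that mirrors the example below but uses the staging issuer (the hostname is hypothetical):
exposeDomains:
  example-le-staging:
    hostname: ssl-le-staging.example.com
    ssl:
      enabled: true
      issuer: letsencrypt-staging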
!NB Deploy `exposeDomains` entries only when the DNS entries are changed or are soon to be changed; otherwise, Letsencrypt validation might eventually get stuck due to retries.
!NB Put `exposeDomains` in a dedicated configuration yaml file, so only one environment (branch) is assigned the hostname. Having multiple environments with the same domain will act as a round-robin load balancer across all of those environments, and unexpected responses might be returned.
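For example, a hypothetical dedicated file used only by the production deployment (typically passed to the deploy job alongside the main silta.yml via its `silta_config` parameter; check your orb version for the exact parameter name):
# silta/silta-prod.yml - deployed only for the production branch
exposeDomains:
  production:
    hostname: www.example.com
    ssl:
      enabled: true
      issuer: letsencrypt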
Drupal chart and Frontend chart:
exposeDomains:
  example-le:
    hostname: ssl-le.example.com
    ssl:
      enabled: true
      issuer: letsencrypt
  example-customcert:
    hostname: ssl-custom.example.com
    ssl:
      enabled: true
      issuer: custom
      # Encrypt key and certificate. See: docs/encrypting_sensitive_configuration.md
      key: |
        -----BEGIN RSA PRIVATE KEY-----
        <KEY>
        -----END RSA PRIVATE KEY-----
      crt: |
        -----BEGIN CERTIFICATE-----
        < DOMAIN CERTIFICATE >
        -----END CERTIFICATE-----
        -----BEGIN CERTIFICATE-----
        < INTERMEDIATE CERTIFICATE >
        -----END CERTIFICATE-----
        -----BEGIN CERTIFICATE-----
        < ROOT CA CERTIFICATE >
        -----END CERTIFICATE-----
The `key` value is the certificate's private key. The `crt` value is the full certificate chain. The `ca` value is not required anymore for exposed domains.
See more information on how to convert and prepare an SSL certificate for exposed domains.
If you have the same SSL certificate for multiple domains, you can reuse the `ssl` block.
exposeDomains:
  example-domain1: &shared-ssl
    ssl:
      [....]
  example-domain2:
    <<: *shared-ssl
  example-domain3:
    <<: *shared-ssl
You normally don't need a custom static IP (via GCE ingress), but if your project requires one, here's how:
exposeDomains:
  example-gce-ingress:
    hostname: gce-ingress.example.com
    # See the ingress.gce definition. This can also be a custom ingress.
    ingress: gce
    ssl:
      enabled: true
      issuer: letsencrypt
ingress:
  gce:
    # Request a global static ip from the OPS team first
    staticIpAddressName: custom-ip-name
nginx:
  # Reverse proxy IPs to trust with contents of the X-Forwarded-For header
  realipfrom:
    gke-internal: 10.0.0.0/8
    # Load Balancer IP (the static ip you were given)
    gce-lb-ip: 1.2.3.4/32
# Depending on the cluster type, you might need to enable this.
# A safe default is "false" (works in both cases), but "VPC Native"
# clusters work more correctly with cluster.vpcNative set to "true".
cluster:
  vpcNative: true
Redirects can be relative to the current domain, or contain a full domain for more targeted redirects when multiple external domains (`exposeDomains`) are attached to the deployment and you only need the redirect for a specific URL.
If you scatter the redirect rules into separate yaml files, use keys (otherwise the latter yaml will overwrite the whole `nginx.redirects` object); the alphabetical order of the keys will be respected in the nginx redirect map. Because of this, it's better to put everything in one file without keys, just descriptions, so the order of the yaml will be respected.
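A sketch of the keyed form (the key names are hypothetical; numeric prefixes are used only to control the alphabetical ordering):
nginx:
  redirects:
    01-old-page:
      from: '/old-page'
      to: '/new-page'
    02-old-blog:
      from: '/old-blog'
      to: '/news'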
Each redirect has `from` and `to` keys, and an optional `description` key, which does not do anything currently; it's a documentation comment for the configuration maintainer.
By default, strings are matched using case-insensitive exact matching.
Regular expressions can be used by prefixing the value with `~` for case-sensitive matching, or with `~*` for case-insensitive matching. Regular expressions can contain named and positional captures that can be referenced in the `to` value.
Make sure to use proper anchors (`^` and `$`) and character escaping in regular expressions, to get exactly the match you want and nothing extra.
- Bad example: `from: '~/old-page'` matches any string containing `/old-page`, e.g. `/anypath/old-page` or `/old-page/anypath` or even `/valid/path?/old-page`.
- Good example: `from: '~^/old-page/.+\.html'` matches specifically the path `/old-page/*.html`.
The `to` value can include references to captured values from regular expressions, and special nginx variables like `$request_uri` or `$query_string`.
Drupal chart and Frontend chart:
nginx:
  redirects:
    - description: 'Redirect exact path match to another path on the same domain.'
      from: '/old-page'
      to: '/new-page'
    - description: 'Redirect exact path match to a path on another domain.'
      from: '/old-page'
      to: 'https://another-domain.example.com/new-page'
    - description: 'Redirect exact url match to another path on the same domain. Note: Matching using https does not work because of SSL/TLS offloading.'
      from: 'http://example.com/old-page'
      to: '/new-page'
    - description: 'Redirect all non-www requests to www, keeping the request path intact.'
      from: '~^http://example\.com'
      to: 'https://www.example.com$request_uri'
    - description: 'Redirect exact url, matching both www and non-www.'
      from: '~http://(www\.)?example\.com/old-page$'
      to: '/new-page'
    - description: 'Redirect case-insensitive url match.'
      from: '~*http://www\.example\.com/oLd-pAgE$'
      to: '/new-page'
    - description: 'Redirect regex match, using positional capturing groups.'
      from: '~^/old-articles/(.+)/view/(\d+)$'
      to: '/new-articles/$1/?article_id=$2'
    - description: 'Redirect regex match, using named capturing groups.'
      from: '~^/old-articles/(?<date>.+)/view/(?<id>\d+)$'
      to: '/new-articles/$date/?article_id=$id'
Resources in Silta charts are protected by Calico NetworkPolicy rules. The rules are defined in the helm `.Values.silta-release.ingressAccess` configuration object. There are a few default rules that deny access to all pods in a deployment from other deployments, but it is also possible to add extra [NetworkPolicy rules](https://projectcalico.docs.tigera.io/security/policy-rules) to selectively allow access to deployment resources.
Here are a few examples:
- Allowing access to pods from another namespace:
silta-release:
  ingressAccess:
    # Allow Frontend access to Drupal via internal connection
    allow-drupal:
      additionalPodSelector:
        app: drupal
      from:
        - namespaceSelector:
            matchLabels:
              name: frontend-ns
- Allow direct elasticsearch access from the frontend namespace:
silta-release:
  ingressAccess:
    # Allow Frontend access to elasticsearch via internal connection
    allowESaccess:
      additionalPodSelector:
        chart: elasticsearch
      from:
        - namespaceSelector:
            matchLabels:
              name: frontend-ns
- Allow CIDR access to a service (routed connection only, does not work with NAT'ted connections):
silta-release:
  ingressAccess:
    # Allow Azure Application Gateway to drupal service
    CustomAzureAppGWAccess:
      from:
        - ipBlock:
            cidr: 1.2.3.4/5
The Drupal chart builds the nginx container using the web/ folder as the build context. This prevents files from being included from outside the web folder, and it's not a good idea to put config files under it.
To be able to add include files, the build context needs to be changed from `web/` to `.` by passing `nginx_build_context: "."` to `drupal-docker-build` in `.circleci/config.yml`:
jobs:
  - silta/drupal-docker-build:
      nginx_build_context: "."
This doesn't work on its own anymore, because the repository root contains a .dockerignore file for the Drupal / shell containers, and the frontend has a separate one inside the web/ folder. Since version 19.03, Docker supports separate .dockerignore files for each Dockerfile. This requires the Docker build to be made with BuildKit enabled. To enable BuildKit, just pass `DOCKER_BUILDKIT=1` to the build environment as an environment variable:
environment:
  DOCKER_BUILDKIT: 1
The ignore file itself needs to be named the same as the Dockerfile with .dockerignore appended to the end, and it needs to reside in the same place as the Dockerfile:
cp web/.gitignore silta/nginx.Dockerfile.dockerignore
Note: our validation checks that the .dockerignore is present under web/, so you can either leave it there or just add an empty file in its place.
To make the image build correctly in this new context, you need to update the COPY command in nginx.Dockerfile to copy `web` instead of `.`, and also add COPY commands for any custom config files you want to be able to include:
COPY silta/nginx.serverextra.conf /etc/nginx
COPY web /app/web
Now you can include the config file in silta.yml like this:
nginx:
  serverExtraConfig: |
    include nginx.serverextra.conf;
Or, if you `COPY` the files under `/etc/nginx/conf.d`, they will be included automatically without the need to add them to the silta.yml configuration.
Having e.g. Storybook or another frontend application included in the base project codebase that requires a separate deployment can easily be done, even using a different chart. See https://wunderio.github.io/silta/docs/circleci-conf-examples for the deployment setup part.
When using different charts (e.g. drupal and simple), you need to separate chart-specific configurations into their own silta-*.yml files if you want to share any configs between the application deployments (for example basic auth credentials). The best way to do it is to put only the shared configuration into the silta.yml file and have e.g. silta-cms.yml and silta-storybook.yml for application-specific configuration.
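A sketch of such a split, reusing examples from this page (the file names are just the ones suggested above):
# silta/silta.yml - shared by all deployments
nginx:
  basicauth:
    credentials:
      username: hello
      password: My$ecretP4ssw0rd

# silta/silta-cms.yml - Drupal deployment only
php:
  env:
    MY_VARIABLE_NAME: "theValueOfMyVariable"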
- In the `silta` folder, create `extra_charts.yml`, which contains the list of subcharts to add.
The following example adds a redis subchart to the drupal chart deployment.
charts:
  - name: redis
    version: 16.8.x
    repository: https://charts.bitnami.com/bitnami
    condition: redis.enabled
To use a local subchart, replace the repository link with `file://<path>/<to>/<subchart>`.
- Add these 2 parameters to the `drupal-build-deploy` CircleCI job:
- silta/drupal-build-deploy:
    source_chart: wunderio/drupal
    extension_file: silta/extra_charts.yml
- If desired, modify variables for the subchart in `silta.yml` under the key of the subchart's name. For the example above, it's `redis`.
[..]
redis:
  enabled: true
  auth:
    password: test
This sets the redis password to `test`.
Notice the `condition` key in `extra_charts.yml` for the redis subchart. It makes it possible to deploy this subchart conditionally, when `redis.enabled` is set in `silta.yml`.
Delete the `condition: redis.enabled` line if you want this subchart installed in all your future deployments, regardless of the settings in `silta.yml`.
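For example, `extra_charts.yml` without the condition (the subchart then deploys unconditionally):
charts:
  - name: redis
    version: 16.8.x
    repository: https://charts.bitnami.com/bitnami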