feat: add template for erpnext #3882
base: main
Conversation
Thanks for this @barredterra!
This compose file is a simple set of containers required to run ERPNext. Two of these services are ideally run only once. The create-site service will not run multiple times; it fails if it finds that the same site has already been created. The configure service runs on every restart. It can either be kept as-is and used to change hosts on restarts, or a script can be added that checks whether the config is already set and skips re-setting it.
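One possible shape for such a guard, inlined into the configure service (a sketch only: it assumes a populated `db_host` key in `common_site_config.json` means configuration already happened, and the specific `set-config` keys shown may differ from your setup):

```yaml
configurator:
  image: frappe/erpnext:${VERSION}
  entrypoint: ["bash", "-c"]
  command:
    - |
      # Skip re-configuration if common_site_config.json already has db_host
      if grep -q '"db_host"' sites/common_site_config.json 2>/dev/null; then
        echo "Config already set, skipping"
        exit 0
      fi
      bench set-config -g db_host "$$DB_HOST"
      bench set-config -g redis_cache "redis://$$REDIS_CACHE"
      bench set-config -g redis_queue "redis://$$REDIS_QUEUE"
  volumes:
    - sites:/home/frappe/frappe-bench/sites
```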
I've added the following service to my docker swarm stack. It never starts on its own; it can be triggered via CI or a webhook when you need to upgrade:

```yaml
migration:
  image: frappe/erpnext:${VERSION}
  deploy:
    restart_policy:
      condition: none
  entrypoint: ["bash", "-c"]
  command:
    - |
      bench --site all set-config -p maintenance_mode 1
      bench --site all set-config -p pause_scheduler 1
      bench --site all migrate
      bench --site all set-config -p maintenance_mode 0
      bench --site all set-config -p pause_scheduler 0
  volumes:
    - sites:/home/frappe/frappe-bench/sites
```

For this case, a script could check whether the site is already running the app versions shipped in the image, and run the command only if any version differs. This will ensure the migration runs only when there is actually something to migrate.
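A sketch of that guard, inlined into the migration command (assumptions: `bench version` prints the installed app versions, and the stamp file `sites/.migrated-versions` is a hypothetical name for a file persisted in the sites volume):

```yaml
command:
  - |
    # Only migrate when the app versions in this image differ
    # from the versions recorded after the last migration.
    current="$$(bench version)"
    if [ -f sites/.migrated-versions ] && [ "$$current" = "$$(cat sites/.migrated-versions)" ]; then
      echo "App versions unchanged, skipping migration"
      exit 0
    fi
    bench --site all set-config -p maintenance_mode 1
    bench --site all set-config -p pause_scheduler 1
    bench --site all migrate
    bench --site all set-config -p maintenance_mode 0
    bench --site all set-config -p pause_scheduler 0
    echo "$$current" > sites/.migrated-versions
```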
Are health checks needed?
My traefik setup is done like this:

```yaml
version: "3.7"

services:
  traefik:
    image: traefik:${TRAEFIK_VERSION:-v2.9}
    ports:
      - target: 80
        published: 80
        mode: host
      - target: 443
        published: 443
        mode: host
    deploy:
      placement:
        constraints:
          # Required for the TLS certificates
          - node.labels.traefik-public.traefik-public-certificates == true
      labels:
        - traefik.enable=true
        - traefik.docker.network=traefik-public
        - traefik.constraint-label=traefik-public
        - traefik.http.middlewares.admin-auth.basicauth.users=admin:${HASHED_PASSWORD?Variable not set}
        - traefik.http.middlewares.https-redirect.redirectscheme.scheme=https
        - traefik.http.middlewares.https-redirect.redirectscheme.permanent=true
        - traefik.http.routers.traefik-public-http.rule=Host(`${TRAEFIK_DOMAIN?Variable not set}`)
        - traefik.http.routers.traefik-public-http.entrypoints=http
        - traefik.http.routers.traefik-public-http.middlewares=https-redirect
        - traefik.http.routers.traefik-public-https.rule=Host(`${TRAEFIK_DOMAIN}`)
        - traefik.http.routers.traefik-public-https.entrypoints=https
        - traefik.http.routers.traefik-public-https.tls=true
        - traefik.http.routers.traefik-public-https.service=api@internal
        - traefik.http.routers.traefik-public-https.tls.certresolver=le
        - traefik.http.routers.traefik-public-https.middlewares=admin-auth
        - traefik.http.services.traefik-public.loadbalancer.server.port=8080
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - traefik-public-certificates:/certificates
    command:
      - --providers.docker
      - --providers.docker.constraints=Label(`traefik.constraint-label`, `traefik-public`)
      - --providers.docker.exposedbydefault=false
      - --providers.docker.swarmmode
      - --entrypoints.http.address=:80
      - --entrypoints.https.address=:443
      - --certificatesresolvers.le.acme.email=${EMAIL?Variable not set}
      - --certificatesresolvers.le.acme.storage=/certificates/acme.json
      - --certificatesresolvers.le.acme.tlschallenge=true
      - --accesslog
      - --log
      - --api
    networks:
      - traefik-public

volumes:
  traefik-public-certificates:

networks:
  traefik-public:
    name: traefik-public
    attachable: true
```

I use these labels:

```yaml
version: "3.7"

services:
  frontend:
    image: frappe/erpnext:${VERSION}
    command:
      - nginx-entrypoint.sh
    deploy:
      restart_policy:
        condition: on-failure
      labels:
        - traefik.enable=true
        - traefik.docker.network=traefik-public
        - traefik.constraint-label=traefik-public
        - traefik.http.middlewares.prod-redirect.redirectscheme.scheme=https
        # Change router name prefix from erpnext to the name of stack in case of multi bench setup
        - traefik.http.routers.${BENCH_NAME:-erpnext}-http.rule=Host(${SITES:?No sites set})
        - traefik.http.routers.${BENCH_NAME:-erpnext}-http.entrypoints=http
        # Remove following lines in case of local setup
        - traefik.http.routers.${BENCH_NAME:-erpnext}-http.middlewares=prod-redirect
        - traefik.http.routers.${BENCH_NAME:-erpnext}-https.rule=Host(${SITES})
        - traefik.http.routers.${BENCH_NAME:-erpnext}-https.entrypoints=https
        - traefik.http.routers.${BENCH_NAME:-erpnext}-https.tls=true
        - traefik.http.routers.${BENCH_NAME:-erpnext}-https.tls.certresolver=le
        # Remove above lines in case of local setup
        # Uncomment and change domain for non-www to www redirect
        # - traefik.http.routers.${BENCH_NAME:-erpnext}-https.middlewares=nonwwwtowww
        # - traefik.http.middlewares.nonwwwtowww.redirectregex.regex=^https?://(?:www\.)?(.*)
        # - traefik.http.middlewares.nonwwwtowww.redirectregex.replacement=https://www.$$1
        - traefik.http.services.${BENCH_NAME:-erpnext}.loadbalancer.server.port=8080
    environment:
      BACKEND: backend:8000
      FRAPPE_SITE_NAME_HEADER: $$host
      SOCKETIO: websocket:9000
      UPSTREAM_REAL_IP_ADDRESS: 127.0.0.1
      UPSTREAM_REAL_IP_HEADER: X-Forwarded-For
      UPSTREAM_REAL_IP_RECURSIVE: "off"
    volumes:
      - sites:/home/frappe/frappe-bench/sites
```
If you need site backups done by frappe-bench, translate this into a scheduled task: https://github.com/frappe/frappe_docker/blob/main/docs/backup-and-push-cronjob.md
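In a swarm stack, one way to schedule it is via the third-party crazy-max/swarm-cronjob service (a sketch, assuming swarm-cronjob is already deployed in the cluster; the schedule below is just an example):

```yaml
backup:
  image: frappe/erpnext:${VERSION}
  entrypoint: ["bash", "-c"]
  command:
    - bench --site all backup
  deploy:
    mode: replicated
    replicas: 0  # started by swarm-cronjob, never on its own
    restart_policy:
      condition: none
    labels:
      - swarm.cronjob.enable=true
      - swarm.cronjob.schedule=0 */6 * * *
      - swarm.cronjob.skip-running=true
  volumes:
    - sites:/home/frappe/frappe-bench/sites
```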
Extracting sitename from SERVICE_FQDN in all places makes things unnecessarily complicated.
Maybe we can add HR and Builder out of the box? (adds a lot of value)
I think ideally we'd make the image configurable (with the current one being the default). That way, everybody should be able to use their own set of apps.
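A sketch of what that could look like using a compose extension field and YAML anchor (the `CUSTOM_IMAGE`/`CUSTOM_TAG` variable names are made up here, not part of the existing templates):

```yaml
x-customizable-image: &customizable_image
  # Users override CUSTOM_IMAGE/CUSTOM_TAG to point at their own
  # image built with their own set of apps; defaults stay as-is.
  image: ${CUSTOM_IMAGE:-frappe/erpnext}:${CUSTOM_TAG:-latest}

services:
  backend:
    <<: *customizable_image
  frontend:
    <<: *customizable_image
```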
Can you help here, @revant? I think getting this merged would be quite nice for the ERPNext ecosystem! Let's get this done @barredterra!
@revant do you have an idea how we could set useful health checks for the websocket, queue workers and scheduler? |
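Not an answer, but a rough sketch for the websocket service (assumptions: it listens on port 9000, and the image provides bash with `/dev/tcp` support). For queue workers and the scheduler, a TCP check only proves the process is up, not that jobs are being consumed; `bench doctor` reports worker/scheduler status but may be too heavy to run as a healthcheck:

```yaml
websocket:
  healthcheck:
    # Succeeds if something accepts a TCP connection on 9000
    test: ["CMD-SHELL", "bash -c 'exec 3<>/dev/tcp/127.0.0.1/9000'"]
    interval: 30s
    timeout: 10s
    retries: 3
    start_period: 30s
```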
Some observations from my experience using pwd.yml in Coolify:
Hope this feedback helps improve the setup process! 😊
Simple approach:
Complex approach:
Is this still being worked on, or should I close this PR?
Currently busy with other things; further work on this PR will likely be delayed until the end of January.
I followed https://coolify.io/docs/contribute/service and modified https://github.com/frappe/frappe_docker/blob/main/pwd.yml to use coolify's generated variables.
Remaining issues:

- the environment variable `SERVICE_PASSWORD_CREATESITE` doesn't automatically show up in the coolify UI (v4.0.0-beta.359); couldn't figure out why. After adding it manually, the deployment works. Edit: this works.
- the environment variable `SERVICE_FQDN_FRONTEND` does not contain an FQDN, as the name would imply, but a URL (including the protocol) -> went back to a hardcoded sitename to avoid extracting the FQDN from the URL in all places
- how do we handle upgrades? Update this file from time to time, or use variables like `image: ${ERPNEXT_IMAGE:-frappe/erpnext}:${ERPNEXT_VERSION_TAG:-latest}`? Automatically run migration patches?
- Add health checks for all services
- Use coolify's proxy (Traefik)? https://coolify.io/docs/knowledge-base/docker/compose#labels
- Write operator docs - any structured way available?
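On the `SERVICE_FQDN_FRONTEND` point: if extracting the FQDN from the URL is ever needed after all, plain bash parameter expansion can do it without extra tooling (a sketch; the example URL is made up):

```shell
#!/usr/bin/env bash
# Strip scheme and path from a URL to get the bare FQDN.
url="https://erp.example.com/some/path"
fqdn="${url#*://}"   # remove the leading scheme, e.g. "https://"
fqdn="${fqdn%%/*}"   # remove everything from the first "/" on
echo "$fqdn"         # prints: erp.example.com
```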
Note: I have little experience with coolify and docker, so all help is welcome.