Welcome to my IBM Cloud Private (Community Edition) on Linux Containers Infrastructure as Code (IaC). With the help of this IaC, developers can easily set up a virtual multi-node ICP cluster on a single Linux bare-metal machine or VM!
This IaC not only takes away the pain of manual configuration, it also saves valuable resources (nodes) by using a single host machine to provide a multi-node ICP Kubernetes experience. It installs the required CLIs, sets up LXD, sets up ICP-CE, and adds some utility scripts.
Because ICP is installed on LXD guest VMs, it can be installed and removed without any impact to the host environment. Only LXD, the CLIs, and other desired/required packages are installed on the host.
ICP 3.2.0 - Getting started
- High Level Architecture
- Supported Platforms
- Topologies
- View Install Configuration
- Usage
- Post Install
- Screenshots
An example 4-node topology
Host | Guest VM | ICP-CE | LXD | Min. Compute Power | User Privileges | Shell
---|---|---|---|---|---|---
Ubuntu 18.04 | Ubuntu 18.04 | 3.2.x/3.1.2 | 3.0.3 (apt) | 8-Core, 16GB RAM, 300GB Disk | root | bash
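Before installing, it can help to confirm the host meets the minimums in the table above; a minimal check, assuming GNU coreutils and procps are available:

```bash
# Compare each value against the table's minimums (8 cores, 16GB RAM, 300GB disk)
nproc                                  # core count, expect >= 8
free -g | awk '/^Mem:/ {print $2}'     # total memory in GB, expect >= 16
df -BG --output=avail /                # available disk on /, expect >= 300G
```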
Boot (B) | Master/Etcd (ME) | Management (M) | Proxy (P) | Worker (W)
---|---|---|---|---
1 (B/ME/M/P) | | | | 1+*
1 (B/ME/M) | | | 1 | 1+*
1 (B/ME/P) | | 1 | | 1+*
1 (B/ME) | | 1 | 1 | 1+*

*Set the desired worker node count in install.properties before setting up the cluster (see the sketch after this table).

Supported topologies based on the ICP Architecture. ICP Community Edition does not support HA; the Master, Management, and Proxy node counts must always be 1.
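The footnote above points at install.properties for the worker count. As a sketch (the key name below is hypothetical; check your copy of install.properties for the exact property):

```
# Hypothetical key name, shown for illustration only;
# look up the exact worker-count property in install.properties
ICP_WORKER_NODE_COUNT=3
```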
sudo su -
git clone https://github.com/HSBawa/icp-ce-on-linux-containers.git
cd icp-ce-on-linux-containers
For simplified setup, a single install.properties file covers configuration for the CLIs, LXD, and ICP.
Examples:
# 3.1.2 or 3.2.0 or 3.2.1
ICP_TAG=3.2.0
# config.yaml.312.tmpl for 3.1.2 or config.yaml.320.tmpl for 3.2.x
ICP_CONFIG_YAML_TMPL_FILE=config.yaml.320.tmpl
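## Example pairing for ICP 3.1.2 (keep the tag and the template file in sync):
# ICP_TAG=3.1.2
# ICP_CONFIG_YAML_TMPL_FILE=config.yaml.312.tmpl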
## Use y to create separate Proxy, Management Nodes
PROXY_NODE=y
MGMT_NODE=y
## If for some reason the public/external IP lookup fails or returns an incorrect address,
## set lookup to 'n', manually provide the IP addresses, and then re-create the cluster
ICP_AUTO_LOOKUP_HOST_IP_ADDRESS_AS_LB_ADDRESS=y
ICP_MASTER_LB_ADDRESS=none
ICP_PROXY_LB_ADDRESS=none
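## Example with illustrative (documentation-range) addresses, for when auto lookup fails:
# ICP_AUTO_LOOKUP_HOST_IP_ADDRESS_AS_LB_ADDRESS=n
# ICP_MASTER_LB_ADDRESS=203.0.113.10
# ICP_PROXY_LB_ADDRESS=203.0.113.10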
## Enable/Disable management services
ICP_MGMT_SVC_CUST_METRICS=enabled
ICP_MGMT_SVC_IMG_SEC_ENFORCE=enabled
ICP_MGMT_SVC_METERING=enabled
...
## Used for console/scripted login; provide your choice of username and password.
## The default namespace will be added to the auto-generated login helper script.
## For extra security, auto-generation of random usernames and passwords based on patterns is supported.
## Auto-generated usernames and/or passwords can be found in config.yaml or the helper login script (keep them secure).
ICP_DEFAULT_NAMESPACE=default
ICP_DEFAULT_ADMIN_USER=admin
ICP_AUTO_GEN_RANDOM_ADMIN_USERNAME=n
ICP_AUTO_GEN_RANDOM_ADMIN_USERNAME_PATTERN=a-z
ICP_AUTO_GEN_RANDOM_ADMIN_USERNAME_LENGTH=10
ICP_DEFAULT_ADMIN_PASSWORD=xxxxxxx
ICP_AUTO_GEN_RANDOM_PASSWORD=y
## ICP default password rule: pattern '^([a-zA-Z0-9\-]{32,})$', i.e. length 32 chars or more
ICP_PASSWORD_RULE_PATTERN=^([a-zA-Z0-9\-]{32,})$
ICP_AUTO_GEN_RANDOM_PASSWORD_LENGTH=35
ICP_AUTO_GEN_RANDOM_PASSWORD_PATTERN=a-zA-Z0-9-
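If you prefer to set ICP_DEFAULT_ADMIN_PASSWORD by hand, a one-liner such as the following (a sketch assuming GNU coreutils) produces a value matching the default 32+ character rule above:

```bash
# Emit 35 random characters drawn from the allowed charset [a-zA-Z0-9-]
tr -dc 'a-zA-Z0-9-' < /dev/urandom | head -c 35; echo
```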
Usage: sudo ./create_cluster.sh [options]
-es or --env-short : Short environment name, e.g. test, dev, demo.
-f or --force : [yY]|[yY][eE][sS] or n. Deletes cluster LXD components from a past install.
-h or --host : Provide host type information: pc (default), vsi, fyre, aws or othervm.
help : Print this usage.
Examples: sudo ./create_cluster.sh --host=fyre
sudo ./create_cluster.sh --host=fyre -f
sudo ./create_cluster.sh -es=demo --force --host=pc
Important Notes:
- The v1.1.3 release of the Terraform Provider for LXD may not work with the recently released Terraform 0.12.x.
- It is important to use the right `host` parameter depending on your host machine/VM.
- The LXD cluster uses an internal, private subnet. To expose the cluster, HAProxy is installed and configured by default to enable remote access.
- Use of a `static external IP` is recommended.
- If the external IP changes after the build, remote access to the cluster will fail and a new build will be required.
- This IaC has not been tested with LXD installed via snap. I ran into so many issues with the snap package that I switched to the apt-based 3.0.3 release, which is considered production stable.
- During install, if you encounter the error "...Failed container creation: Create LXC container: LXD doesn't have a uid/gid allocation...", validate that the files '/etc/subgid' and '/etc/subuid' have content similar to that shown below (a quick check is sketched after these notes):
lxd:100000:65536
root:100000:65536
[username goes here]:165536:65536
- During install, if your build is stuck at the following message for more than 10 minutes: "....icp_ce_master: Still creating... ", perform the following steps:
  * Cancel the installation (Ctrl-C). This may take more than one attempt.
* Destroy cluster (./destroy_cluster.sh)
* Create cluster (./create_cluster.sh)
If you still see this issue on the next attempt, open a Git issue with as many details as possible, and I will take a look.
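For the uid/gid note above, a one-line check of the allocation files:

```bash
# Print the lxd and root entries from both files; compare with the
# expected values listed in the note above
grep -E '^(lxd|root):' /etc/subuid /etc/subgid
```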
sudo ./download_icp_cloudctl_helm.sh
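After the script finishes, a quick sanity check that the CLIs respond (exact version output varies by release):

```bash
# Both commands only print version info; they do not require a logged-in session
cloudctl version
helm version --client
```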
./icp-login-3.2.0-ce.sh
or
cloudctl login -a https://<internal_master_ip>:8443 -u <default_admin_user> -p <default_admin_password> -c id-devicpcluster-account -n default --skip-ssl-validation
or
cloudctl login -a https://<public_ip>:8443 -u <default_admin_user> -p <default_admin_password> -c id-devicpcluster-account -n default --skip-ssl-validation
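A successful `cloudctl login` also configures the kubectl context, so standard kubectl commands can confirm the cluster is healthy:

```bash
# List cluster nodes and system pods as a post-login health check
kubectl get nodes -o wide
kubectl get pods -n kube-system
```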
sudo ./destroy_cluster.sh (Deletes the LXD cluster along with ICP-CE. Use with caution.)