EKS cluster with private endpoint / vpc-cni ordering #24
-
When deploying a new cluster I wasn't able to get the node groups to join the cluster initially. It seems this is because the cluster endpoint is private and the vpc-cni addon isn't deployed yet. By default we have the private-only endpoint configuration. What am I missing here?
-
Just last week I went through this same deployment. In my case, I also had the default configuration. Notably these:

```yaml
# eks/cluster
components:
  terraform:
    eks/cluster:
      vars:
        ...
        # private configuration
        cluster_endpoint_private_access: true
        cluster_endpoint_public_access: false
        cluster_private_subnets_only: true
        # addon configuration
        addons_depends_on: true
        cluster_kubernetes_version: "1.29"
        # EKS addons
        # https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html
        # AWS recommends to provision the required EKS addons and not to rely on the managed addons
        addons:
          # https://docs.aws.amazon.com/eks/latest/userguide/cni-iam-role.html
          # https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html
          # https://docs.aws.amazon.com/eks/latest/userguide/cni-iam-role.html#cni-iam-role-create-role
          # https://aws.github.io/aws-eks-best-practices/networking/vpc-cni/#deploy-vpc-cni-managed-add-on
          vpc-cni:
            addon_version: "v1.18.1-eksbuild.3" # set `addon_version` to `null` to use the latest version
          # https://docs.aws.amazon.com/eks/latest/userguide/managing-kube-proxy.html
          kube-proxy:
            addon_version: "v1.29.3-eksbuild.2" # set `addon_version` to `null` to use the latest version
          # https://docs.aws.amazon.com/eks/latest/userguide/managing-coredns.html
          coredns:
            addon_version: "v1.11.1-eksbuild.9"
            ## Enable autoscaling for coredns (requires AWS EKS build later than 2024-05-15)
            configuration_values: '{"autoScaling":{"enabled":true}}'
          # https://docs.aws.amazon.com/eks/latest/userguide/csi-iam-role.html
          # https://aws.amazon.com/blogs/containers/amazon-ebs-csi-driver-is-now-generally-available-in-amazon-eks-add-ons
          # https://docs.aws.amazon.com/eks/latest/userguide/managing-ebs-csi.html#csi-iam-role
          # https://github.com/kubernetes-sigs/aws-ebs-csi-driver
          aws-ebs-csi-driver:
            addon_version: "v1.31.0-eksbuild.1" # set `addon_version` to `null` to use the latest version
            # If you are not using [volume snapshots](https://kubernetes.io/blog/2020/12/10/kubernetes-1.20-volume-snapshot-moves-to-ga/#how-to-use-volume-snapshots)
            # (and you probably are not), disable the EBS Snapshotter with:
            configuration_values: '{"sidecars":{"snapshotter":{"forceEnable":false}}}'
          aws-efs-csi-driver:
            addon_version: "v2.0.5-eksbuild.1"
            resolve_conflicts: OVERWRITE
```

I also had trouble getting the nodes to join the cluster. In my case, it was because we didn't have any way for the nodes to egress the private subnets. I had to enable a NAT gateway and route traffic through it. It may not be the same for your use-case, but I would recommend checking the route tables for your private subnets.
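In case it helps, here is roughly what that looked like on the `vpc` component side. This is only a sketch: I'm assuming the Cloud Posse `vpc` component and that `public_subnets_enabled` / `nat_gateway_enabled` are the relevant variable names in your version, so check the component's variables before copying.

```yaml
# vpc (sketch -- assumes the Cloud Posse vpc component; variable names may differ)
components:
  terraform:
    vpc:
      vars:
        # Public subnets are needed to host the NAT gateways
        public_subnets_enabled: true
        # NAT gateways give the private subnets a default route out, which the
        # nodes need to pull images and reach AWS APIs
        # (unless you provision VPC endpoints for those services)
        nat_gateway_enabled: true
```

With the cluster endpoint private, the nodes reach the EKS API over the in-VPC endpoint ENIs, but they still need either NAT or VPC endpoints for ECR, S3, STS, etc.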
@rauthur Do you have egress from your private subnets? Does the cluster endpoint resolve from inside the VPC?
Looks like I had deployed block-all behavior to the Route53 DNS resolver firewall ONLY in our `dev` stage. Whoops.
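Follow-up in case anyone else trips on this: the fix was simply making sure the resolver firewall allows the EKS endpoint domains ahead of any block-all rule. The snippet below is purely illustrative -- the component name `route53-resolver-dns-firewall` and its variables are my assumption of how such a stack might be wired up, not an exact schema.

```yaml
# Hypothetical stack config for a Route53 Resolver DNS Firewall component.
# Component and variable names are illustrative only -- adapt them to whatever
# module you actually use to manage the firewall rule groups.
components:
  terraform:
    route53-resolver-dns-firewall:
      vars:
        rules:
          # Lower priority values are evaluated first: allow the EKS
          # private endpoint domains before anything else
          - priority: 100
            action: ALLOW
            domains:
              - "*.eks.amazonaws.com."
          # ...then block everything else (this is the rule we had
          # accidentally scoped to the dev stage only)
          - priority: 200
            action: BLOCK
            block_response: NODATA
            domains:
              - "*."
```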