Error: Get "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth" #143
`outputs.eks_cluster_endpoint` returning `localhost` in some cases
@nitrocode Please look into why …
This is causing issues when attempting to delete EKS clusters as well.
This happens to me as well when running module version 0.45.0, but without any subnet changes.
I'm currently on module version 0.44.0, and I'm getting something similar when updating to module version 0.45.0 and then running `terraform plan`.
I managed to more or less bypass the issue. This way I have more control over the provider version, which I suspect is causing issues.
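The commenter's actual snippet isn't in the thread; a minimal sketch of what the bypass might look like, assuming it means pinning the `kubernetes` provider at the root module (the version constraint is illustrative, not from the comment):

```hcl
# Hypothetical sketch: pin the kubernetes provider at the root module so
# module upgrades cannot silently pull in a newer provider release.
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.7" # example constraint only
    }
  }
}
```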
There are various problems caused by the fact that we are calling the API of a resource that is being created or deleted at the same time. The official recommendation from HashiCorp is to break this module up into multiple modules: one to create the EKS cluster, one to create the `aws-auth` ConfigMap.

Recommended workarounds

This module provides 3 different authentication mechanisms to help work around the issues. We generally recommend using …

@nitrocode Providing a KUBECONFIG via …
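For illustration, a sketch of toggling between the module's authentication mechanisms; the input names below reflect my reading of the module's variables and are assumptions to verify against the module's README:

```hcl
module "eks_cluster" {
  source = "cloudposse/eks-cluster/aws"
  # ... other inputs ...

  # Assumed input names -- check the module's documented variables.
  kube_exec_auth_enabled = true  # authenticate via an exec-based token
  kube_data_auth_enabled = false # disable the data-source token mechanism
}
```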
Duplicate of #104
Describe the Bug
I noticed that if the EKS cluster is switching subnets, particularly from public + private to only private, the EKS cluster will return an endpoint of `localhost` (for `aws_eks_cluster.default[0].endpoint`).

Related code
terraform-aws-eks-cluster/auth.tf, lines 88 to 97 in b745ed1
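For context, a paraphrased sketch of the kind of wiring in that region of `auth.tf` (not the verbatim lines at the permalink; the local names are placeholders):

```hcl
# Paraphrased, not the module's actual code: the kubernetes provider is
# configured from the EKS cluster's own attributes, so when the endpoint
# comes back empty the provider silently falls back to localhost.
provider "kubernetes" {
  host                   = local.eks_cluster_endpoint
  cluster_ca_certificate = base64decode(local.certificate_authority_data)
  token                  = data.aws_eks_cluster_auth.eks.token
}
```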
Workaround 1
To get around this issue, I have to delete the Kubernetes ConfigMap resource from the Terraform state, and then the module can be tricked into redeploying the EKS cluster (due to the change in subnets). A sketch of the command follows.
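A sketch of removing the ConfigMap from state; the resource address below is hypothetical and should be taken from `terraform state list` in your own workspace:

```sh
# Find the exact address first; the one below is illustrative only.
terraform state list | grep kubernetes_config_map
terraform state rm 'module.eks_cluster.kubernetes_config_map.aws_auth[0]'
```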
or in atmos
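Something like the following, assuming atmos passes `state` subcommands through to terraform; the component and stack names are placeholders:

```sh
# Placeholder component (eks-cluster) and stack name -- adjust to your setup.
atmos terraform state rm eks-cluster 'module.eks_cluster.kubernetes_config_map.aws_auth[0]' -s <stack>
```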
If this workaround was done by mistake, you can re-import the deleted ConfigMap:
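A sketch of the re-import; the kubernetes provider imports ConfigMaps by `<namespace>/<name>`, and the resource address is again hypothetical:

```sh
terraform import 'module.eks_cluster.kubernetes_config_map.aws_auth[0]' kube-system/aws-auth
```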
Workaround 2
I've also run into this issue when importing an existing cluster into the Terraform module. My workaround for the import is to do a `terraform init` and modify the downloaded module `eks_cluster`'s `auth.tf` to set the `host` arg of the `kubernetes` provider to a dummy URL, along the lines of the sketch below.
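A minimal sketch of the manual edit, assuming the module was vendored into `.terraform/modules/eks_cluster/auth.tf`; everything except the `host` line stays as the module wrote it:

```hcl
# Hand-edit in .terraform/modules/eks_cluster/auth.tf (lost on the next
# `terraform init`): point the provider at a dummy URL so the import does
# not try to reach localhost.
provider "kubernetes" {
  host = "https://dummy.invalid" # dummy URL replacing the cluster endpoint
  # ... leave the remaining arguments untouched ...
}
```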
Proposal
Instead, it would be nice if we could either detect that the endpoint returns localhost and use something else that won't fail the kubeconfig, or disable the kubernetes provider completely when the endpoint is localhost.
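A rough sketch of the first option, guarding the provider's `host`; the local names are placeholders, not the module's actual locals:

```hcl
# Hypothetical guard: if the cluster endpoint is empty or localhost,
# substitute an unreachable dummy endpoint instead of letting the
# kubernetes provider default to localhost.
locals {
  # coalesce() also skips empty strings, so a missing endpoint maps
  # to "localhost" and is caught by the check below.
  raw_endpoint = coalesce(one(aws_eks_cluster.default[*].endpoint), "localhost")
  safe_endpoint = (
    length(regexall("localhost", local.raw_endpoint)) > 0
    ? "https://dummy.invalid"
    : local.raw_endpoint
  )
}

provider "kubernetes" {
  host = local.safe_endpoint
  # ... certificate and token arguments as before ...
}
```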