This repository has been archived by the owner on Jan 31, 2023. It is now read-only.

Unable to login to openunison #32

Open
bbellrose1 opened this issue Feb 19, 2021 · 58 comments

Comments

@bbellrose1

I followed the instructions for deploying this using Helm. I noticed that it does not create a pod for the orchestra, only an object of kind "OpenUnison". There is no pod, no service, no ingress. How can I expose this without a service?

I am using kubernetes 1.20 with Kong as my Ingress Controller. I am using kong with nodeport. So any service would have to be a NodePort service:

$ kubectl get all -n openunison
NAME READY STATUS RESTARTS AGE
pod/openunison-operator-6b95dc6574-v2t7p 1/1 Running 0 2d

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/openunison-operator 1/1 1 1 2d

NAME DESIRED CURRENT READY AGE
replicaset.apps/openunison-operator-6b95dc6574 1 1 1 2d

kubectl get openunison orchestra -n openunison
NAME AGE
orchestra 4h34m

For the values.yaml used with Helm, would I need to set my openunison_host to be my cluster IP:NodePort? How do I know what I should set this URL to? Normally I hit my host with the generic VIP I set up and then the NodePort.

Brian

@dkulchinsky
Contributor

You should check the openunison-operator logs; it is responsible for intercepting the OpenUnison custom resource and spinning up the appropriate Pods, Services, Ingress, etc.

AFAIK, openunison-operator is designed to work with nginx ingress controller, but perhaps there's a way to get some customization.
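
For reference, a quick way to pull those logs (a sketch; the namespace and deployment name are taken from the kubectl output above):

kubectl logs -n openunison deployment/openunison-operator --tail=200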

@bbellrose1
Author

You should check the openunison-operator logs; it is responsible for intercepting the OpenUnison custom resource and spinning up the appropriate Pods, Services, Ingress, etc.

AFAIK, openunison-operator is designed to work with nginx ingress controller, but perhaps there's a way to get some customization.

Ok, thanks. Yes, I see an error about creating secrets in the logs for the operator; I will look into that. Regarding Nginx for ingress: I did see that. Someone from Tremolo indicated that other ingresses should be supported in the future and that using Kong should not prevent this from working. I am not sure why the ingress would matter. Ingress just handles traffic coming into the cluster, and OpenUnison should only be deploying into the cluster. Any ingress should be fine as long as it points to the correct backend service.

Brian

@bbellrose1
Author

You should check the openunison-operator logs; it is responsible for intercepting the OpenUnison custom resource and spinning up the appropriate Pods, Services, Ingress, etc.

AFAIK, openunison-operator is designed to work with nginx ingress controller, but perhaps there's a way to get some customization.

Here is what I see in the operator log. Note that it states to "check the logs"; I'm not sure which logs it means:

Done invoking javascript
Checking if need to create a status for : 'MODIFIED'
Generating status
Creating status patch : {"digest":"HbzY1jW79QuOV8/lrdyFgEBAsVQplkxCMVNlJoQ9f3Y=","conditions":{"reason":"Unable to generate secrets, please check the logs","lastTransitionTime":"2021-02-19 02:13:23GMT","type":"Failed","status":"True"}}
Patching to '/apis/openunison.tremolo.io/v2/namespaces/openunison/openunisons/orchestra/status'
Patch : '{"status":{"digest":"HbzY1jW79QuOV8/lrdyFgEBAsVQplkxCMVNlJoQ9f3Y=","conditions":{"reason":"Unable to generate secrets, please check the logs","lastTransitionTime":"2021-02-19 02:13:23GMT","type":"Failed","status":"True"}}}'

@mlbiam
Contributor

mlbiam commented Feb 19, 2021

I am using kubernetes 1.20 with Kong as my Ingress Controller. I am using kong with nodeport. So any service would have to be a NodePort service:

I don't know that openunison would need to run on a NodePort. The Ingress object the operator generates automatically is designed to work with nginx Ingress controller, but that's primarily about the annotations. I haven't worked with Kong yet, but there are a couple of approaches you can take to get it running:

  1. Tell the operator it's Nginx (even though it isn't) and add the annotations needed for Kong to work with it
  2. Skip Ingress generation and generate the correct CR objects Kong uses on its own

For 1, the key is to have the annotation kubernetes.io/ingress.class set to the right value for Kong to pick it up. Additionally you need to tell Kong:

  1. To communicate via HTTPS to the service
  2. Enable cookie-based session stickiness

This can be done in the helm chart by adding the annotations to network.ingress_annotations in the values.yaml like:

network:
  .
  ingress_type: nginx
  ingress_annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: "letsencrypt"
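
For Kong specifically, a hedged sketch of option 1 might look like the following; kubernetes.io/ingress.class: kong is the standard class annotation, while the Kong-specific settings for HTTPS-to-backend and cookie stickiness are assumptions to verify against the Kong ingress controller documentation:

network:
  .
  ingress_type: nginx                    # keep nginx so the operator still renders an Ingress
  ingress_annotations:
    kubernetes.io/ingress.class: kong    # let Kong claim the generated Ingress
    # HTTPS to the backend and cookie-based stickiness are configured on the Kong
    # side (assumption: via konghq.com/* annotations or a KongIngress resource)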

For option 2, you can set network.ingress_type to none and the Ingress object won't be created for you by the operator. Then you'll need to configure an Ingress for Kong manually that meets the above requirements (HTTPS to the openunison-orchestra service, cookie based sticky sessions).
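
For option 2, a minimal hand-written Ingress sketch might look like this; the host name is a hypothetical placeholder for your network.openunison_host, and port 443 is an assumption to check against the openunison-orchestra Service:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: openunison
  namespace: openunison
  annotations:
    kubernetes.io/ingress.class: kong
spec:
  tls:
  - hosts:
    - k8sou.example.com                  # hypothetical openunison_host
    secretName: ou-tls-certificate
  rules:
  - host: k8sou.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: openunison-orchestra
            port:
              number: 443                # assumption: the Service's HTTPS port

Kong would still need to be told to use HTTPS to the backend and cookie-based sticky sessions, per the requirements above.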

For the values.yaml used with Helm, would I need to set my openunison_host to be my cluster IP:NodePort?

Your network.openunison_host should be the host name you want users to use to access the OpenUnison portal. The same is true for network.dashboard_host. These values must be different host names.

Creating status patch : {"digest":"HbzY1jW79QuOV8/lrdyFgEBAsVQplkxCMVNlJoQ9f3Y=","conditions":{"reason":"Unable to generate secrets, please check the logs","lastTransitionTime":"2021-02-19 02:13:23GMT","type":"Failed","status":"True"}}

There should be an error earlier in the logs? Have you deployed the k8s dashboard yet?

@bbellrose1
Author

I will look at the suggestions for the Helm values. Also, yes, the dashboard is installed and I can connect. I am currently using the same VIP (which points to each of my control plane nodes) for the Dashboard and what would have been OpenUnison, and just separating them by NodePort. Apparently that is not going to work and I should set up a distinct entry in my F5 for each.

Brian

@mlbiam
Contributor

mlbiam commented Feb 19, 2021

Yes, the dashboard is installed and I can connect. I am currently using the same VIP (which points to each of my control plane nodes) for the Dashboard and what would have been OpenUnison, and just separating them by NodePort. Apparently that is not going to work and I should set up a distinct entry in my F5 for each.

OpenUnison runs two "logical" applications (akin to virtual hosts) that both run in the same container. You wouldn't need (nor want) to run the dashboard as a public Ingress or NodePort. For the dashboard it's User --> LB --> Ingress --> OpenUnison --> Dashboard.

@bbellrose1
Author

Yes, the dashboard is installed and I can connect. I am currently using the same VIP (which points to each of my control plane nodes) for the Dashboard and what would have been OpenUnison, and just separating them by NodePort. Apparently that is not going to work and I should set up a distinct entry in my F5 for each.

OpenUnison runs two "logical" applications (akin to virtual hosts) that both run in the same container. You wouldn't need (nor want) to run the dashboard as a public Ingress or NodePort. For the dashboard it's User --> LB --> Ingress --> OpenUnison --> Dashboard.

I have pod security policies in place. Is it possible that is what is preventing this from deploying correctly? Do I need to give the openunison-operator service account elevated privileges?

@bbellrose1
Author

I tried updating the environment so that the Dashboard host pointed to the cluster IP of the dashboard service. I believe it was stated that OpenUnison could access the dashboard pod to pod rather than using the external address. However, I am still seeing issues:

Invoking javascript
in js : {"type":"ADDED","object":{"metadata":{"generation":1,"uid":"d387b98c-82f9-4a39-8eba-1c9e9d800e20","managedFields":[{"apiVersion":"openunison.tremolo.io/v2","fieldsV1":{"f:metadata":{"f:annotations":{"f:meta.helm.sh/release-namespace":{},"f:meta.helm.sh/release-name":{},".":{}},"f:labels":{".":{},"f:app.kubernetes.io/managed-by":{}}},"f:spec":{"f:key_store":{"f:static_keys":{},"f:update_controller":{"f:days_to_expire":{},"f:schedule":{},"f:image":{},".":{}},"f:trusted_certificates":{},".":{},"f:key_pairs":{"f:create_keypair_template":{},"f:keys":{},".":{}}},"f:source_secret":{},"f:hosts":{},"f:enable_activemq":{},"f:deployment_data":{"f:node_selectors":{},"f:tokenrequest_api":{"f:audience":{},"f:enabled":{},"f:expirationSeconds":{},".":{}},"f:readiness_probe_command":{},"f:pull_secret":{},"f:liveness_probe_command":{},".":{}},"f:image":{},"f:non_secret_data":{},"f:dest_secret":{},"f:openunison_network_configuration":{"f:activemq_dir":{},"f:quartz_dir":{},"f:secure_key_alias":{},"f:path_to_env_file":{},"f:secure_port":{},"f:ciphers":{},".":{},"f:allowed_client_names":{},"f:open_port":{},"f:path_to_deployment":{},"f:client_auth":{},"f:open_external_port":{},"f:force_to_secure":{},"f:secure_external_port":{}},"f:secret_data":{},".":{},"f:replicas":{}}},"manager":"Go-http-client","time":"2021-02-19T21:46:10Z","operation":"Update","fieldsType":"FieldsV1"}],"resourceVersion":"9504500","creationTimestamp":"2021-02-19T21:46:10Z","name":"orchestra","namespace":"openunison","annotations":{"meta.helm.sh/release-name":"orchestra","meta.helm.sh/release-namespace":"openunison"},"labels":{"app.kubernetes.io/managed-by":"Helm"}},"apiVersion":"openunison.tremolo.io/v2","kind":"OpenUnison","spec":{"image":"docker.io/tremolosecurity/openunison-k8s-login-oidc:latest","source_secret":"orchestra-secrets-source","openunison_network_configuration":{"client_auth":"none","secure_external_port":443,"activemq_dir":"/tmp/amq","force_to_secure":true,"quartz_dir":"/tmp/quartz","allowed_client_names":[],"secure_port":8443,"secure_key_alias":"unison-tls","open_external_port":80,"path_to_deployment":"/usr/local/openunison/work","ciphers":["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384","TLS_RSA_WITH_AES_256_GCM_SHA384","TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384","TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384","TLS_DHE_RSA_WITH_AES_256_GCM_SHA384"],"path_to_env_file":"/etc/openunison/ou.env","open_port":8080},"hosts":[{"secret_name":"ou-tls-certificate","ingress_name":"openunison","names":[{"env_var":"OU_HOST","name":"#######"},{"env_var":"K8S_DASHBOARD_HOST","name":"10.109.116.58"}],"annotations":[{"name":"kubernetes.io/ingress.class","value":"kong"}],"ingress_type":"nginx"}],"replicas":1,"non_secret_data":[{"name":"K8S_URL","value":"https://10.220.5.125:6443"},{"name":"SESSION_INACTIVITY_TIMEOUT_SECONDS","value":"900"},{"name":"MYVD_CONFIG_PATH","value":"WEB-INF/myvd.conf"},{"name":"K8S_DASHBOARD_NAMESPACE","value":"kubernetes-dashboard"},{"name":"K8S_DASHBOARD_SERVICE","value":"kubernetes-dashboard"},{"name":"K8S_CLUSTER_NAME","value":"nalk8s"},{"name":"K8S_IMPERSONATION","value":"false"},{"name":"PROMETHEUS_SERVICE_ACCOUNT","value":"system:serviceaccount:monitoring:prometheus-k8s"},{"name":"OIDC_CLIENT_ID","value":"0oawsalojzrbfk52Y0h7"},{"name":"OIDC_IDP_AUTH_URL","value":"https:///oauth2/1/authorize"},{"name":"OIDC_IDP_TOKEN_URL","value":"https:///v1/token"},{"name":"OIDC_IDP_LIMIT_DOMAIN","value":""},{"name":"SUB_CLAIM","value":"sub"},{"name":"EMAIL_CLAIM","value":"email"},{"name":"GIVEN_N
AME_CLAIM","value":"given_name"},{"name":"FAMILY_NAME_CLAIM","value":"family_name"},{"name":"DISPLAY_NAME_CLAIM","value":"name"},{"name":"GROUPS_CLAIM","value":"groups"},{"name":"OIDC_USER_IN_IDTOKEN","value":"false"},{"name":"OIDC_IDP_USER_URL","value":"https:///v1/userinfo"},{"name":"OIDC_SCOPES","value":"openid email profile groups"}],"deployment_data":{"node_selectors":[],"tokenrequest_api":{"audience":"api","expirationSeconds":600,"enabled":true},"liveness_probe_command":["/usr/local/openunison/bin/check_alive.py"],"readiness_probe_command":["/usr/local/openunison/bin/check_alive.py","https://127.0.0.1:8443/auth/idp/k8sIdp/.well-known/openid-configuration","issuer"],"pull_secret":""},"key_store":{"trusted_certificates":[{"pem_data":"SDFGSDFGHDFHSDFGSDGSDFGDS","name":"idp"}],"static_keys":[{"name":"session-unison","version":1},{"name":"lastmile-oidc","version":1}],"update_controller":{"days_to_expire":10,"image":"docker.io/tremolosecurity/kubernetes-artifact-deployment:1.1.0","schedule":"0 2 * * *"},"key_pairs":{"create_keypair_template":[{"name":"ou","value":"TLIT"},{"name":"o","value":"Railcar Management, LLC."},{"name":"l","value":"New Albany"},{"name":"st","value":"Ohio"},{"name":"c","value":"US"}],"keys":[{"create_data":{"server_name":"openunison-orchestra.openunison.svc","subject_alternative_names":[],"ca_cert":true,"sign_by_k8s_ca":false,"key_size":2048},"name":"unison-tls","import_into_ks":"keypair"},{"create_data":{"server_name":"kubernetes-dashboard.kubernetes-dashboard.svc","subject_alternative_names":[],"secret_info":{"key_name":"dashboard.key","cert_name":"dashboard.crt","type_of_secret":"Opaque"},"ca_cert":true,"delete_pods_labels":["k8s-app=kubernetes-dashboard"],"sign_by_k8s_ca":false,"key_size":2048,"target_namespace":"kubernetes-dashboard"},"replace_if_exists":true,"name":"kubernetes-dashboard","tls_secret_name":"kubernetes-dashboard-certs","import_into_ks":"certificate"},{"create_data":{"server_name":"unison-saml2-rp-sig","subject_alternative_names":[],"ca_cert":true,"sign_by_k8s_ca":false,"key_size":2048},"name":"unison-saml2-rp-sig","import_into_ks":"keypair"}]}},"enable_activemq":false,"dest_secret":"orchestra","secret_data":["K8S_DB_SECRET","unisonKeystorePassword","OIDC_CLIENT_SECRET"]}}}
Getting host variable names
Host #0
Name #0
OU_HOST
########
Name #1
K8S_DASHBOARD_HOST
10.109.116.58
Done invoking javascript
Checking if need to create a status for : 'ADDED'
Generating status
Creating status patch : {"digest":"goyp4KncEm4ikJNsElOVw+rGyDRapWjyo91bgGSCDwE=","conditions":{"reason":"Unable to generate secrets, please check the logs","lastTransitionTime":"2021-02-19 09:46:10GMT","type":"Failed","status":"True"}}
Patching to '/apis/openunison.tremolo.io/v2/namespaces/openunison/openunisons/orchestra/status'
Patch : '{"status":{"digest":"goyp4KncEm4ikJNsElOVw+rGyDRapWjyo91bgGSCDwE=","conditions":{"reason":"Unable to generate secrets, please check the logs","lastTransitionTime":"2021-02-19 09:46:10GMT","type":"Failed","status":"True"}}}'
{code=200, data={"apiVersion":"openunison.tremolo.io/v2","kind":"OpenUnison","metadata":{"annotations":{"meta.helm.sh/release-name":"orchestra","meta.helm.sh/release-namespace":"openunison"},"creationTimestamp":"2021-02-19T21:46:10Z","generation":1,"labels":{"app.kubernetes.io/managed-by":"Helm"},"managedFields":[{"apiVersion":"openunison.tremolo.io/v2","fieldsType":"FieldsV1","fieldsV1":{"f:status":{".":{},"f:conditions":{".":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"f:digest":{}}},"manager":"Apache-HttpClient","operation":"Update","time":"2021-02-19T21:46:10Z"},{"apiVersion":"openunison.tremolo.io/v2","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app.kubernetes.io/managed-by":{}}},"f:spec":{".":{},"f:deployment_data":{".":{},"f:liveness_probe_command":{},"f:node_selectors":{},"f:pull_secret":{},"f:readiness_probe_command":{},"f:tokenrequest_api":{".":{},"f:audience":{},"f:enabled":{},"f:expirationSeconds":{}}},"f:dest_secret":{},"f:enable_activemq":{},"f:hosts":{},"f:image":{},"f:key_store":{".":{},"f:key_pairs":{".":{},"f:create_keypair_template":{},"f:keys":{}},"f:static_keys":{},"f:trusted_certificates":{},"f:update_controller":{".":{},"f:days_to_expire":{},"f:image":{},"f:schedule":{}}},"f:non_secret_data":{},"f:openunison_network_configuration":{".":{},"f:activemq_dir":{},"f:allowed_client_names":{},"f:ciphers":{},"f:client_auth":{},"f:force_to_secure":{},"f:open_external_port":{},"f:open_port":{},"f:path_to_deployment":{},"f:path_to_env_file":{},"f:quartz_dir":{},"f:secure_external_port":{},"f:secure_key_alias":{},"f:secure_port":{}},"f:replicas":{},"f:secret_data":{},"f:source_secret":{}}},"manager":"Go-http-client","operation":"Update","time":"2021-02-19T21:46:10Z"}],"name":"orchestra","namespace":"openunison","resourceVersion":"9504502","uid":"d387b98c-82f9-4a39-8eba-1c9e9d800e20"},"spec":{"deployment_data":{"liveness_probe_command":["/usr/local/openunison/bin/check_alive.py"],"node_selectors":[],"pull_secret":"","readiness_probe_command":["/usr/local/openunison/bin/check_alive.py","https://127.0.0.1:8443/auth/idp/k8sIdp/.well-known/openid-configuration","issuer"],"tokenrequest_api":{"audience":"api","enabled":true,"expirationSeconds":600}},"dest_secret":"orchestra","enable_activemq":false,"hosts":[{"annotations":[{"name":"kubernetes.io/ingress.class","value":"kong"}],"ingress_name":"openunison","ingress_type":"nginx","names":[{"env_var":"OU_HOST","name":"#######"},{"env_var":"K8S_DASHBOARD_HOST","name":"10.109.116.58"}],"secret_name":"ou-tls-certificate"}],"image":"docker.io/tremolosecurity/openunison-k8s-login-oidc:latest","key_store":{"key_pairs":{"create_keypair_template":[{"name":"ou","value":"TLIT"},{"name":"o","value":"Railcar Management, LLC."},{"name":"l","value":"New 
Albany"},{"name":"st","value":"Ohio"},{"name":"c","value":"US"}],"keys":[{"create_data":{"ca_cert":true,"key_size":2048,"server_name":"openunison-orchestra.openunison.svc","sign_by_k8s_ca":false,"subject_alternative_names":[]},"import_into_ks":"keypair","name":"unison-tls"},{"create_data":{"ca_cert":true,"delete_pods_labels":["k8s-app=kubernetes-dashboard"],"key_size":2048,"secret_info":{"cert_name":"dashboard.crt","key_name":"dashboard.key","type_of_secret":"Opaque"},"server_name":"kubernetes-dashboard.kubernetes-dashboard.svc","sign_by_k8s_ca":false,"subject_alternative_names":[],"target_namespace":"kubernetes-dashboard"},"import_into_ks":"certificate","name":"kubernetes-dashboard","replace_if_exists":true,"tls_secret_name":"kubernetes-dashboard-certs"},{"create_data":{"ca_cert":true,"key_size":2048,"server_name":"unison-saml2-rp-sig","sign_by_k8s_ca":false,"subject_alternative_names":[]},"import_into_ks":"keypair","name":"unison-saml2-rp-sig"}]},"static_keys":[{"name":"session-unison","version":1},{"name":"lastmile-oidc","version":1}],"trusted_certificates":[{"name":"idp","pem_data":"SDFGSDFGHDFHSDFGSDGSDFGDS"}],"update_controller":{"days_to_expire":10,"image":"docker.io/tremolosecurity/kubernetes-artifact-deployment:1.1.0","schedule":"0 2 * * *"}},"non_secret_data":[{"name":"K8S_URL","value":"https://10.220.5.125:6443"},{"name":"SESSION_INACTIVITY_TIMEOUT_SECONDS","value":"900"},{"name":"MYVD_CONFIG_PATH","value":"WEB-INF/myvd.conf"},{"name":"K8S_DASHBOARD_NAMESPACE","value":"kubernetes-dashboard"},{"name":"K8S_DASHBOARD_SERVICE","value":"kubernetes-dashboard"},{"name":"K8S_CLUSTER_NAME","value":"nalk8s"},{"name":"K8S_IMPERSONATION","value":"false"},{"name":"PROMETHEUS_SERVICE_ACCOUNT","value":"system:serviceaccount:monitoring:prometheus-k8s"},{"name":"GIVEN_NAME_CLAIM","value":"given_name"},{"name":"FAMILY_NAME_CLAIM","value":"family_name"},{"name":"DISPLAY_NAME_CLAIM","value":"name"},{"name":"GROUPS_CLAIM","value":"groups"},/v1/userinfo"},{"name":"OIDC_SCOPES","value":"openid email profile groups"}],"openunison_network_configuration":{"activemq_dir":"/tmp/amq","allowed_client_names":[],"ciphers":["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384","TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384","TLS_RSA_WITH_AES_256_GCM_SHA384","TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384","TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384","TLS_DHE_RSA_WITH_AES_256_GCM_SHA384"],"client_auth":"none","force_to_secure":true,"open_external_port":80,"open_port":8080,"path_to_deployment":"/usr/local/openunison/work","path_to_env_file":"/etc/openunison/ou.env","quartz_dir":"/tmp/quartz","secure_external_port":443,"secure_key_alias":"unison-tls","secure_port":8443},"replicas":1,"secret_data":["K8S_DB_SECRET","unisonKeystorePassword","OIDC_CLIENT_SECRET"],"source_secret":"orchestra-secrets-source"},"status":{"conditions":{"lastTransitionTime":"2021-02-19 09:46:10GMT","status":"True","type":"Failed"},"digest":"goyp4KncEm4ikJNsElOVw+rGyDRapWjyo91bgGSCDwE="}}
}

@mlbiam
Contributor

mlbiam commented Feb 24, 2021

I tried updating the environment so that the Dashboard host pointed to the cluster IP of the dashboard service. I believe it was stated that OpenUnison could access the dashboard pod to pod rather than using the external address

Correct, there's no need to have an external IP for the dashboard. OpenUnison talks to it via the service. Just to double check, the dashboard is version 2.x running in the kubernetes-dashboard namespace, right?

Also, did you create the orchestra-secrets-source? It's odd because the operator is processing your hosts then dying.
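
A quick sanity check that the source secret actually exists (a sketch; the name matches the source_secret shown in the CR dump above):

kubectl get secret orchestra-secrets-source -n openunison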

@bbellrose1
Author

Yes dashboard runs in that namespace:
kubectl get all -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
pod/dashboard-metrics-scraper-745cf9f488-68gxg 1/1 Running 0 5d14h
pod/kubernetes-dashboard-7448ffc97b-g2lvg 1/1 Running 0 14d

Pod Template:
Labels: k8s-app=kubernetes-dashboard
Service Account: kubernetes-dashboard
Containers:
kubernetes-dashboard:
Image: kubernetesui/dashboard:v2.1.0

I created the orchestra-secrets-source per the documentation. Opaque secret with base 64 encoded values.

Brian

@mlbiam
Contributor

mlbiam commented Feb 24, 2021

I'm noticing you have an IP for your K8S_DASHBOARD_HOST. Is the same true of OU_HOST? We generate self-signed certs with SANs based on these values, and I know that will fail if it's an IP. There should be a better error message, though. This cert is a "getting started" cert for the Ingress controller and can be skipped if your controller has a common self-signed cert or you're using something like cert-manager.

Can you use host names for these instead of IPs? Generally the host names will point to the load balancer, which in turn is linked to the Ingress controller which routes the request based on your Ingress object (or ingress specific CRDs).
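
For reference, the relevant values would then look something like this (the host names are hypothetical placeholders for DNS entries that resolve to your load balancer):

network:
  openunison_host: k8sou.example.com     # hypothetical; the name users type to reach the portal
  dashboard_host: k8sdb.example.com      # hypothetical; must be a different host name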

@bbellrose1
Author

I was originally using my CNAME (an alias for my cluster); however, that was not working since I use NodePort for my ingress. So should I use the service name for the Dashboard?

When you say load balancer, are you talking about the internal one? I have Calico, which does load balancing. I also have my Kong proxy.

@mlbiam
Contributor

mlbiam commented Feb 24, 2021

This is a common question. Let me draw something out to see if it helps, since the terms can be interpreted in different ways depending on what layer of the infrastructure you're talking about.

@mlbiam
Contributor

mlbiam commented Feb 24, 2021

Does this do a better job of explaining the network flow and how the pieces fit? This is generic (though based on a simple model such as Nginx Ingress). I've not deployed Kong so I'm not sure how it would differ, but most Ingress controllers work along the same lines. The DNS entries for both OU_HOST and K8S_DASHBOARD_HOST have IPs/CNAMEs that point to your load balancer. Your load balancer sends all traffic to the Ingress controller (Kong/Nginx/Istio/etc) based on its own configuration. The Ingress controller routes traffic for both host names to the OpenUnison pods. OpenUnison then routes dashboard traffic directly to the dashboard Service (with the appropriate identity information included).

[Diagram: openunison_k8s_network — user → load balancer → Ingress controller → OpenUnison pods → dashboard Service]

I think the issue you are seeing right now is because your K8S_DASHBOARD_HOST is set to an IP. Both OU_HOST and K8S_DASHBOARD_HOST should be fully qualified DNS host names that point to your load balancer.

@bbellrose1
Author

Ok, so you're assuming there is a load balancer in front of Kubernetes. I have an F5, but it does not have the Kubernetes load balancer option. So, as I stated earlier, I am using NodePort and allowing kube-api and Calico to do the load balancing to pods.

Sounds like I need to create some more F5 entries and mask the nodePort for the objects.

Of course, for your example you have k8sdb.domain.com for your dashboard, but then you state: "OpenUnison then routes dashboard traffic directly to the dashboard Service". Apparently that is not entirely true, since it appears you need a CNAME entry in the load balancer for the dashboard too.

So sounds like I need to do the following:

Two separate F5 entries for Dashboard and OpenUnison that would point to my nodePorts managed by Kong ingress proxy. Sound right?

Brian

@mlbiam
Contributor

mlbiam commented Feb 24, 2021

Ok, so you're assuming there is a load balancer in front of Kubernetes.

yes

Of course, for your example you have k8sdb.domain.com for your dashboard, but then you state: "OpenUnison then routes dashboard traffic directly to the dashboard Service". Apparently that is not entirely true, since it appears you need a CNAME entry in the load balancer for the dashboard too.

You need the additional CNAME entry in the load balancer so OpenUnison knows how to route traffic. We do so based on the host name of the URL (similar to how the Ingress controller routes requests based on host name to Service objects). Based on the host in the URL, we're able to determine where to forward traffic (either to our own internal application or to the dashboard's service).

Two separate F5 entries for Dashboard and OpenUnison that would point to my nodePorts managed by Kong ingress proxy. Sound right?

There would be a single nodePort (as OpenUnison has just one Service)

@bbellrose1
Author

bbellrose1 commented Feb 25, 2021

I tried to redeploy with modifications to the values.yaml for those URLs. Still getting errors during creation:

Getting host variable names
Host #0
Name #0
OU_HOST
Creating openunison keystore
Storing k8s certificate
Storing trusted certificates
Error on watch - {"type":"ADDED","object":{"metadata":{"generation":1,"uid":"1ee42931-c7d8-4f87-8e40-java.lang.IllegalArgumentException: Last unit does not have enough valid bits

@mlbiam
Contributor

mlbiam commented Feb 25, 2021

Your hosts shouldn't include the protocol

@bbellrose1
Author

Your hosts shouldn't include the protocol

Removed https from the hosts, but still seeing: java.lang.IllegalArgumentException: Last unit does not have enough valid bits

@mlbiam
Contributor

mlbiam commented Feb 25, 2021

What happens if you remove the ports? (I know you want to run on nodeport, just curious if it will run)

@bbellrose1
Author

Same error during creation after removing ports as well (java.lang.IllegalArgumentException: Last unit does not have enough valid bits)

Brian

@mlbiam
Contributor

mlbiam commented Feb 25, 2021

Hmm. Something in your input is breaking cert generation. Will add better error handling and reporting to the operator to give you better feedback.

@bbellrose1
Author

Hmm. Something in your input is breaking cert generation. Will add better error handling and reporting to the operator to give you better feedback.

Could it be this part of the values.yaml that is used during the helm install?

openunison:
  replicas: 1
  non_secret_data: {}
  secrets: []

Should that secrets entry point to the Kubernetes secret I created? I tried adding it like:

openunison:
  replicas: 1
  non_secret_data: {}
  secrets: orchestra-secrets-source

but I received an error with that as well:

$ helm install orchestra tremolo/openunison-k8s-login-oidc --namespace openunison -f ./values.yaml
Error: template: openunison-k8s-login-oidc/templates/openunison.yaml:281:20: executing "openunison-k8s-login-oidc/templates/openunison.yaml" at <.Values.openunison.secrets>: range can't iterate over orchestra-secrets-source
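
The error suggests the template iterates (range) over openunison.secrets, so that value has to stay a YAML list; the generated CR shown earlier already carries source_secret: orchestra-secrets-source, so the secret doesn't need to be added here. A sketch of the corrected block:

openunison:
  replicas: 1
  non_secret_data: {}
  secrets: []        # must remain a list; leave it empty here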

@bbellrose1
Author

It seems like the issue might have been related to the trusted_certs section. I did not originally change that from the example values.yaml provided by OpenUnison. I then attempted to use the name and PEM value from my OIDC provider, and when trying to create after that I get the following error:

java.security.cert.CertificateException: java.io.IOException: Invalid BER/DER data (too huge?)

@mlbiam
Contributor

mlbiam commented Mar 1, 2021

So you're saying your values.yaml still has:

trusted_certs:
  - name: idp
    pem_b64: SDFGSDFGHDFHSDFGSDGSDFGDS

?

@bbellrose1
Author

It did, until I just tried it with the PEM value from my OIDC provider's cert. However, when I did that I got the "too huge" error.

@mlbiam
Contributor

mlbiam commented Mar 1, 2021

Is your OIDC identity provider a public provider (i.e. Okta)?

@bbellrose1
Author

Yes, our company has set up okta.

@mlbiam
Contributor

mlbiam commented Mar 1, 2021

Great, you don't need to set the cert.

If you don't need a certificate to talk to your identity provider, replace the trusted_certs section with trusted_certs: [].

Once you replace that section with an empty list, hopefully we'll get past the cert generation error.

@bbellrose1
Author

Ok, I will try it

@bbellrose1
Author

Ok, yes, that seemed to work... Now the orchestra gets an error :)

[2021-03-01 16:49:00,152][main] INFO OpenShiftTarget - Token: '***************************'
com.tremolosecurity.provisioning.core.ProvisioningException: Could not load token

Caused by: java.nio.file.NoSuchFileException: /var/run/secrets/kubernetes.io/serviceaccount/token

@mlbiam
Contributor

mlbiam commented Mar 1, 2021

Odd. In your values.yaml, what's your services.enable_tokenrequest?

@bbellrose1
Author

services:
  enable_tokenrequest: true
  token_request_audience: api
  token_request_expiration_seconds: 600
  node_selectors: []
  pullSecret: ""

@mlbiam
Contributor

mlbiam commented Mar 1, 2021

does your cluster support the TokenRequest API?

@mlbiam
Contributor

mlbiam commented Mar 1, 2021

I double-checked the code; we're not making TokenRequest available until 1.0.21 (the next release), coming this week. For now, services.enable_tokenrequest must be false.
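
Applied to the services block shown above, that would be:

services:
  enable_tokenrequest: false           # required until 1.0.21 ships
  token_request_audience: api
  token_request_expiration_seconds: 600
  node_selectors: []
  pullSecret: ""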

@bbellrose1
Author

Ok, thanks. That helped... I am seeing the pod for the orchestra now. If I could get Kong to play nice we would be in good shape. Unless you have any suggestions regarding the Kong config, I think I have gone as far as I can with this thread. What I am seeing now is an issue with the Kong ingress communicating with OpenUnison:

2021/03/01 19:25:52 [error] 22#0: *20344447 upstream timed out (110: Operation timed out) while connecting to upstream, client: 172.16.0.0, server: kong, request: "GET /ou HTTP/2.0", upstream: "https://172.16.127.136:8443/"
172.16.0.0 - - [01/Mar/2021:19:25:52 +0000] "GET /ou HTTP/2.0" 504 51 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:85.0) Gecko/20100101 Firefox/85.0"

@mlbiam
Contributor

mlbiam commented Mar 1, 2021

Is 172.16.127.136 the IP of the openunison-orchestra pod? Is Kong running inside the cluster?

@bbellrose1
Author

Yes that is the orchestra pod:

pod/openunison-orchestra-9c7d69cc5-xmlv6 1/1 Running 0 28m 172.16.127.136

Yes kong is inside the cluster
NAME READY STATUS RESTARTS AGE
pod/ingress-kong-665f6857df-tk68r 2/2 Running 0 45d

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kong-proxy NodePort 10.98.75.102 80:32408/TCP,443:31725/TCP 52d
service/kong-validation-webhook ClusterIP 10.101.134.2 443/TCP 52d

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ingress-kong 1/1 1 1 52d

NAME DESIRED CURRENT READY AGE
replicaset.apps/ingress-kong-665f6857df 1 1 1 52d

@mlbiam
Contributor

mlbiam commented Mar 1, 2021

What's your network_policies.enabled in your values.yaml? If it's true, did you update network_policies.ingress.labels to match the namespace that Kong runs in?

@bbellrose1
Author

Here is what I currently have (monitoring commented out as I am not running prometheus)

network_policies:
  enabled: true
  ingress:
    enabled: true
    labels:
      app.kubernetes.io/name: openunison
  #monitoring:
  #  enabled: true
  #  labels:
  #    app.kubernetes.io/name: monitoring
  apiserver:
    enabled: true
    labels:
      app.kubernetes.io/name: kube-system

So I should add:

  labels:
    app.kubernetes.io/name: kong

under the ingress section?

@bbellrose1
Author

I added Kong to the network policies. I see that it created one, but I still see the timeout errors.

2021/03/01 21:49:47 [error] 22#0: *20496133 upstream timed out (110: Operation timed out) while connecting to upstream, client: 172.16.0.0, server: kong, request: "GET /ou HTTP/2.0", upstream: "https://172.16.127.141:8443/"

Name: allow-from-ingress
Namespace: openunison
Created on: 2021-03-01 15:50:40 -0500 EST
Labels: app.kubernetes.io/managed-by=Helm
Annotations: meta.helm.sh/release-name: orchestra
meta.helm.sh/release-namespace: openunison
Spec:
PodSelector: application=openunison-orchestra
Allowing ingress traffic:
To Port: (traffic allowed to all ports)
From:
NamespaceSelector: app.kubernetes.io/name=kong
Not affecting egress traffic
Policy Types: Ingress

@mlbiam
Contributor

mlbiam commented Mar 2, 2021

I think what you want is:

network_policies:
  enabled: true
  ingress:
    enabled: true
    labels:
      app.kubernetes.io/name: kong
  monitoring:
    enabled: false
    labels:
      app.kubernetes.io/name: monitoring
  apiserver:
    enabled: false
    labels:
      app.kubernetes.io/name: kube-system

The labels must all match the labels on the source namespace to allow traffic from it. So adding app.kubernetes.io/name: kong while still having app.kubernetes.io/name: openunison would create a policy that says "Only allow traffic from namespaces with the label app.kubernetes.io/name: openunison AND app.kubernetes.io/name: kong".

Also, commenting out the monitoring block will take the defaults, which is enabled: true. Mark it as enabled: false to not allow inbound access from an in-cluster prometheus.
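
Since the policy matches on namespace labels, it's also worth confirming that the namespace Kong runs in actually carries that label (a sketch; substitute your real Kong namespace name):

# check the labels on the Kong namespace
kubectl get namespace kong --show-labels
# add the label the NetworkPolicy selects on, if it's missing
kubectl label namespace kong app.kubernetes.io/name=kong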

@bbellrose1
Author

Ok, I made those changes and redeployed. I am still seeing the timeout errors connecting to the upstream target. So I can see in Kong ingress where it tries to connect to openunison, but I don't see anything in the logs for the orchestra. Is there somewhere else I should look?

2021/03/02 13:03:33 [error] 22#0: *21140768 upstream timed out (110: Operation timed out) while connecting to upstream, client: 172.16.220.64, server: kong, request: "GET /ou HTTP/2.0", upstream: "https://172.16.127.145:8443/"

NAME READY STATUS RESTARTS AGE IP NODE
openunison-operator-6b95dc6574-t7rtn 1/1 Running 0 5m57s 172.16.127.149
openunison-orchestra-9c7d69cc5-nxcnd 1/1 Running 0 5m24s 172.16.127.145

@bbellrose1
Author

I see in the logs for the orchestra that it is watching the following cluster IP:

[2021-03-02 13:03:18,628][Thread-14] INFO K8sLoadTrusts - watching https://10.96.0.1:443/apis/openunison.tremolo.io/v1/namespaces/openunison/trusts?watch=true&timeoutSeconds=1

However that is the default kubernetes cluster IP for my system in the default namespace. That would be coordinated from the kubernetes-api no?

[2021-03-02 13:03:18,628][Thread-14] INFO K8sLoadTrusts - watching https://10.96.0.1:443/apis/openunison.tremolo.io/v1/namespaces/openunison/trusts?watch=true&timeoutSeconds=10

Do I need to change the api server mentioned in the values.yaml to be the kong-proxy?

Brian

@bbellrose1
Author

I logged into the orchestra pod and am not sure if it is healthy:

openunison@openunison-orchestra-9c7d69cc5-5vkph:/$ curl -L -k -I localhost:8443
curl: (8) Weird server reply
$ grep 172 /etc/hosts
172.16.127.155 openunison-orchestra-9c7d69cc5-5vkph

curl -L -k -I https://172.16.127.155:8443
HTTP/1.1 404 Not Found
Connection: keep-alive
Set-Cookie: openSession=f9a0efd1da854a02662907ce4d31d75f7b50a3ff7; path=/; secure; HttpOnly
Content-Length: 0
Date: Tue, 02 Mar 2021 14:08:34 GMT

@mlbiam
Contributor

mlbiam commented Mar 2, 2021

Ok, I made those changes and redeployed. I am still seeing the timeout errors connecting to the upstream target. So I can see in Kong ingress where it tries to connect to openunison, but I don't see anything in the logs for the orchestra. Is there somewhere else I should look?

If you disable network policies entirely by setting network_policies.enabled to false, is Kong able to hit orchestra?
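
In the values.yaml that test would just be:

network_policies:
  enabled: false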

I see in the logs for the orchestra that it is watching the following cluster IP:

[2021-03-02 13:03:18,628][Thread-14] INFO K8sLoadTrusts - watching > https://10.96.0.1:443/apis/openunison.tremolo.io/v1/namespaces/openunison/trusts?watch=true&timeoutSeconds=1

However that is the default kubernetes cluster IP for my system in the default namespace. That would be coordinated from the kubernetes-api no?

This is OpenUnison talking to the API server to see if any changes have come in for trust objects. These are the objects that you create for adding additional clusters or applications, and it is unrelated to the issues you're seeing now.

@bbellrose1
Author

Disabling the network policies and adding an ingress rule helped, but on the orchestra pod I see:

[2021-03-02 15:49:23,384][XNIO-1 task-1] INFO AccessLog - [NotFound] - UNKNOWN - https://XXXXXXX:31725/ - cn=none - Resource Not Found [172.16.64.138] - [fcb433f6953777147e2962d1acab94a5c30eeee8d]

@mlbiam
Contributor

mlbiam commented Mar 2, 2021

[2021-03-02 15:49:23,384][XNIO-1 task-1] INFO AccessLog - [NotFound] - UNKNOWN - https://XXXXXXX:31725/ - cn=none - Resource Not Found [172.16.64.138] - [fcb433f6953777147e2962d1acab94a5c30eeee8d]

Is XXXXXXX what you put in the URL bar of your browser? Is it the same as your network.openunison_host?

@bbellrose1
Author

Yes, the host is the same as what I'm putting in the browser.

@mlbiam
Contributor

mlbiam commented Mar 3, 2021

Are you including the port in network.openunison_host? If so, try removing it.

@bbellrose1
Author

That does not seem to help either. I am getting an upstream timeout...

upstream timed out (110: Operation timed out) while connecting to upstream, client: 172.16.44.192, server: kong, request: "GET / HTTP/2.0", upstream: "http://172.16.127.162:8443/"

@bbellrose1
Author

If I am on the orchestra host, shouldn't I be able to curl the site?

openunison@openunison-orchestra-9c7d69cc5-czhcx:/$ curl -L http://localhost:8080/
curl: (7) Failed to connect to localhost port 443: Connection refused
openunison@openunison-orchestra-9c7d69cc5-czhcx:/$ curl -L -k http://localhost:8080/
curl: (7) Failed to connect to localhost port 443: Connection refused
openunison@openunison-orchestra-9c7d69cc5-czhcx:/$ curl -L -k https://localhost:443/
curl: (7) Failed to connect to localhost port 443: Connection refused
openunison@openunison-orchestra-9c7d69cc5-czhcx:/$ curl -L -k https://localhost:8443
openunison@openunison-orchestra-9c7d69cc5-czhcx:/$ curl -I -L -k https://localhost:8443
HTTP/1.1 404 Not Found
Connection: keep-alive
Set-Cookie: openSession=f2d61acc740f61efc3a67a063607687df011bf9db; path=/; secure; HttpOnly
Content-Length: 0
Date: Wed, 03 Mar 2021 18:34:59 GMT

@bbellrose1
Author

I tried using the pod's IP as well. This is what the openunison service points to, which explains why Kong is getting errors as well:

2021/03/03 18:59:00 [error] 22#0: *22804688 upstream timed out (110: Operation timed out) while connecting to upstream, client: 172.16.220.64, server: kong, request: "GET / HTTP/2.0", upstream: "https://172.16.127.167:8443/"

NAME READY STATUS RESTARTS AGE IP
openunison-operator-6b95dc6574-8sg49 1/1 Running 0 132m 172.16.127.166
openunison-orchestra-9c7d69cc5-czhcx 1/1 Running 0 124m 172.16.127.167

openunison@openunison-orchestra-9c7d69cc5-czhcx:/$ curl -I -k -L https://172.16.127.167:8443
HTTP/1.1 404 Not Found
Connection: keep-alive
Set-Cookie: openSession=f515cd5f64f532d895d2fe21722adf797cf60a982; path=/; secure; HttpOnly
Content-Length: 0
Date: Wed, 03 Mar 2021 20:23:56 GMT

@bbellrose1
Author

Definitely something off with how these network policies are set up, I think. I don't have any issues hitting my Kubernetes dashboard, so I know that Kong and Calico are basically working correctly.

I found that if I log in to the orchestra pod and hit localhost, I can get a response from Tremolo, but if I try to hit the service from the operator pod, I get a timeout.

openunison@openunison-orchestra-9c7d69cc5-czhcx:/$ curl -L -i -k https://127.0.0.1:8443/auth
HTTP/1.1 302 Found
Connection: keep-alive
Location: https://127.0.0.1:8443/auth/
Content-Length: 0
Date: Thu, 04 Mar 2021 21:28:40 GMT

HTTP/1.1 500 Internal Server Error
Connection: keep-alive
Set-Cookie: openSession=f09cad44e7e26a43a8ae09643cfdfbb8586882149; path=/; secure; HttpOnly
Content-Type: text/html;charset=UTF-8
Content-Length: 2303
Date: Thu, 04 Mar 2021 21:28:41 GMT

@bbellrose1
Author

Seems like there is an issue with the openunison site/dashboard. The server does not respond. Getting 404 errors on the default page. Is there a specific URL that is required? The documentation seems to indicate that nothing is required and a redirect would occur if someone hits the base URL, but the indication from the server is that this is not happening:
$ wget --debug --no-check-certificate https://172.16.127.167:8443/
Setting --check-certificate (checkcertificate) to 0
Setting --check-certificate (checkcertificate) to 0
DEBUG output created by Wget 1.19.4 on linux-gnu.

Reading HSTS entries from /usr/local/openunison/.wget-hsts
URI encoding = 'ANSI_X3.4-1968'
converted 'https://172.16.127.167:8443/' (ANSI_X3.4-1968) -> 'https://172.16.127.167:8443/' (UTF-8)
Converted file name 'index.html' (UTF-8) -> 'index.html' (ANSI_X3.4-1968)
--2021-03-05 14:49:31-- https://172.16.127.167:8443/
Connecting to 172.16.127.167:8443... connected.
Created socket 5.

---request begin---
GET / HTTP/1.1
User-Agent: Wget/1.19.4 (linux-gnu)
Accept: */*
Accept-Encoding: identity
Host: 172.16.127.167:8443
Connection: Keep-Alive

---request end---
HTTP request sent, awaiting response...
---response begin---
HTTP/1.1 404 Not Found
Connection: keep-alive
Set-Cookie: openSession=fcf77a35be2dcabd683c1efdbc03216673057f710; path=/; secure; HttpOnly
Content-Length: 0
Date: Fri, 05 Mar 2021 14:49:31 GMT

---response end---
404 Not Found

@mlbiam
Contributor

mlbiam commented Mar 5, 2021

Getting 404 errors on the default page. Is there a specific URL that is required?

Yes. You need to hit OpenUnison using the host name as defined in network.openunison_host. You're getting a 404 because there's no app defined for 127.0.0.1/. I think the issue you're having is that you're not trying to access it via the standard HTTPS port (443). If you do a curl to your Kong ingress, do you get a 302 in response? Is it hanging in the browser?
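
One way to test the host-based routing directly against the pod (a sketch; k8sou.example.com stands in for whatever network.openunison_host is set to) is to send the expected Host header, which should produce a redirect rather than the 404 you get when curling the bare IP:

curl -k -I https://172.16.127.167:8443/ -H 'Host: k8sou.example.com'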

@bbellrose1
Author

bbellrose1 commented Mar 5, 2021 via email

@bbellrose1
Author

Seems like there is a disconnect between OpenUnison and the Kubernetes cluster. When Kong receives the request from the browser, the Ingress catches it and forwards it to the openunison-orchestra service, which in turn knows about the pod that OpenUnison is running on. So what I see is the service handing off to the internal IP of the pod:

client: 172.16.0.0, server: kong, request: "GET / HTTP/2.0", upstream: "https://172.16.127.167:8443/

And this receives a timeout error. It sounds like you are saying this is occurring because OpenUnison does not know about that host, and the service should be redirecting the call to the name provided by the network.openunison_host parameter; is that correct?

However, when I curl the internal IP for that pod and pass the address as indicated, I get a 404 Not Found:
openunison@openunison-orchestra-9c7d69cc5-czhcx:/$ curl -I -L -k https://172.16.127.167:8443
HTTP/1.1 404 Not Found

If I modify that and pass /auth to the internal IP, I seem to get a redirect code indicating it found something. So it seems like this may be an OpenUnison issue more than a Kong redirect issue, no?

openunison@openunison-orchestra-9c7d69cc5-czhcx:/$ curl -I -L -k https://172.16.127.167:8443/auth
HTTP/1.1 302 Found
Connection: keep-alive
Location: https://172.16.127.167:8443/auth/
Content-Length: 0
Date: Mon, 08 Mar 2021 20:23:01 GMT

HTTP/1.1 500 Internal Server Error
Connection: keep-alive
Set-Cookie: openSession=f78589a2eb85bfea79ad5f1c12f47b549bfda153f; path=/; secure; HttpOnly
Content-Type: text/html;charset=UTF-8
Content-Length: 2303
Date: Mon, 08 Mar 2021 20:23:01 GMT
