I am a bit confused; it is possible I accidentally (while not paying attention) switched the version back to 0.9.0.
And no, the LDAP server did not change and is not the problem. I took it out of the config, and the problem persisted.
Here is a minimal config:
ingress:
  annotations:
    ingress.kubernetes.io/proxy-body-size: 64m
    ingress.kubernetes.io/proxy-connect-timeout: "30"
    ingress.kubernetes.io/proxy-read-timeout: "3600"
    ingress.kubernetes.io/proxy-send-timeout: "3600"
    kubernetes.io/ingress.class: nginx
  enabled: true
  hosts:
    - hc7-demo.internal.sanger.ac.uk
  tls:
    - hosts:
        - hc7-demo.internal.sanger.ac.uk
      secretName: tls-jupyter
proxy:
  secretToken: NNNNNNNNNNNNNNN
  service:
    type: ClusterIP
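Since I may have switched the chart version without noticing, the deployed version can be confirmed directly with Helm (the release name `jhub` and namespace are assumptions here; substitute your own):

```shell
# List the release and show which chart version is actually deployed;
# the CHART column should read e.g. jupyterhub-0.9.0 or jupyterhub-0.10.x
helm list --namespace jhub

# Show the values the release was actually installed with,
# to rule out stale or unexpected overrides
helm get values jhub --namespace jhub
```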
It does the same. In the proxy log:
08:55:16.591 [ConfigProxy] error: 503 GET /user/hc7/oauth_callback?code=2nwBxqVsWzFQdD79nxMtrAApt3kYok&state=eyJ1dWlkIjogImE5ZGNmNTQ4OWZmNzRlODBiMjY5YzZlY2EwNjhmZGMxIiwgIm5leHRfdXJsIjogIi91c2VyL2hjNy90cmVlPyJ9 socket hang up
In the hub log:
[I 2021-02-03 08:54:26.523 JupyterHub log:181] 302 GET /hub/api/oauth2/authorize?client_id=jupyterhub-user-hc7&redirect_uri=%2Fuser%2Fhc7%2Foauth_callback&response_type=code&state=[secret] → /user/hc7/oauth_callback?code=[secret]&state=[secret] (hc7@192.168.199.103) 30.54ms
[I 2021-02-03 08:54:53.141 JupyterHub proxy:319] Checking routes
[I 2021-02-03 08:55:16.604 JupyterHub log:181] 200 GET /hub/error/503?url=%2Fuser%2Fhc7%2Foauth_callback%3Fcode%3D2nwBxqVsWzFQdD79nxMtrAApt3kYok%26state%3DeyJ1dWlkIjogImE5ZGNmNTQ4OWZmNzRlODBiMjY5YzZlY2EwNjhmZGMxIiwgIm5leHRfdXJsIjogIi91c2VyL2hjNy90cmVlPyJ9 (@10.42.2.75) 8.21ms
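Since the 503 happens when the proxy tries to forward to the user pod, one way to narrow this down is to inspect the route table that configurable-http-proxy is actually serving, via its REST API. This is a sketch assuming the default z2jh layout (a `proxy-api` service on port 8001, the `CONFIGPROXY_AUTH_TOKEN` environment variable set in the hub pod, and a namespace named `jhub`):

```shell
# Ask the proxy for its current route table from inside the hub pod;
# /user/hc7 should map to the user pod's IP:port if the route was registered
kubectl exec --namespace jhub deploy/hub -- \
  sh -c 'curl -s -H "Authorization: token $CONFIGPROXY_AUTH_TOKEN" \
    http://proxy-api:8001/api/routes'
```

If `/user/hc7` is present but points at a stale or unreachable target, the "socket hang up" would be consistent with the proxy connecting to the wrong backend rather than an authentication problem.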
The user pod launched (with no password, of course, because there is no authentication). This is presumably a proxy issue.