The primary issue is that the logged-in user is unable to see launch progress and unable to delete their own server. As such, the scopes seem to be off.
JupyterHub is launched using the Helm chart (version: 2.0.1-0.dev.git.6012.h458d566c). Here is the templated Helm config. The Hub itself is running on AWS behind a classic Elastic Load Balancer, and I point to it through a CNAME record at my DNS provider. We deploy our own k8s cluster (not through EKS).
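For context, a minimal sketch of what this kind of setup typically looks like in the Zero to JupyterHub values file, assuming TLS is offloaded to the AWS load balancer via an ACM certificate (the ARN below is a placeholder, not from this deployment):

```yaml
# Hedged sketch of Z2JH values for AWS HTTPS offloading; the certificate
# ARN is a placeholder and must be replaced with your own ACM cert.
proxy:
  https:
    enabled: true
    type: offload   # TLS terminates at the AWS load balancer, not in-cluster
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:123456789012:certificate/REPLACE-ME
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
```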
The user is dynamically authenticated through GitHub and authorized through a JSON file. I have also done org-based authorization without overriding the authenticator, and the same issue holds. I am also seeing the CORS issue that has been discussed on this forum, but I am unsure how to fix that in our setup. We are not running nginx, just the classic load balancer deployed by the Helm chart.
Update: I can confirm that using Let's Encrypt solves this issue. However, we cannot switch to that mode easily, so the standard AWS load balancer offloading is still going to be relevant to us.
The CORS problem (mismatch between http and https) is the root problem. This means your user isn’t authenticated, and therefore doesn’t have the needed scopes.
You’ll need to look into your load-balancer configuration to find out how to set the correct headers so JupyterHub knows https is used.
Thanks @manics. Isn't the load-balancer configuration + HTTPS offloading handled by the Helm chart? Switching HTTPS to use Let's Encrypt instead of the AWS certificate resolves the problem, so perhaps something in the JupyterHub Helm chart needs to be adjusted in relation to AWS.
It depends on how you've configured everything. If you're terminating HTTPS at the load balancer, then you need to look at your load-balancer annotations to set some forwarded headers.
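One common gotcha here, assuming a classic ELB: if the listener runs in TCP/SSL (layer 4) mode, the load balancer does not add an `X-Forwarded-Proto` header, so JupyterHub sees plain HTTP and generates `http://` URLs, producing the scheme mismatch. Telling the ELB to speak HTTP to the backend makes it operate at layer 7 and add the forwarded headers. A hedged sketch of the relevant annotations in the Z2JH values (untested against your specific cluster):

```yaml
# Hedged sketch: run the ELB listener at layer 7 so it injects
# X-Forwarded-Proto/X-Forwarded-For, letting JupyterHub know HTTPS is used.
proxy:
  service:
    annotations:
      # Terminate TLS on ports in aws-load-balancer-ssl-ports...
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
      # ...but talk plain HTTP to the proxy pod, so forwarded headers are set.
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
```

With Let's Encrypt mode, TLS terminates inside the cluster at the proxy itself, which is consistent with that mode fixing the problem for you.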
The good news is that in JupyterHub 4.0 the CORS detection is improved, so if you've already got a deployment working with Let's Encrypt, the easiest option is to stay with that for now.