503 GET /jupyter socket hang up

Hi all

We have deployed jupyterhub into a docker image - running in a kubernetes cluster.
We are using jupyterhub-kubespawner - and it works well.

However, we have created an ‘extension’ to the kubespawner which generates a menu of ‘available published notebooks’. The menu is made available to the user as a picklist; the user picks an item, and the docker image for that notebook is spawned into a Pod.

The menu of notebooks is pulled from a request to an internally deployed service (‘AppService’, the notebook registry).

Sequence of events (supporting jupyterhub log messages in quotes):
User authenticates successfully and is logged in.
“JupyterHub base:813] User logged in: xxxxx”
“302 POST /jupyter/hub/login → /jupyter/hub/spawn”
“Recording first activity for <User(xxxxx 0/1 running)>”
“Checking access via scope servers”
“Argument-based access to /jupyter/hub/spawn via servers”

then my log message
“Asking AppStore at URL [http://appservice.default.svc.cluster.local/api/services/appservice/getapps] for current registered docker images/versions”
[URL is live and working correctly]

“[ConfigProxy] error: 503 GET /jupyter socket hang up” (repeatedly).

No images are spawned, no menu is displayed, and the request finally ends in a 502 Bad Gateway.

Can anyone advise on the 503 ConfigProxy issue here and how to resolve it?

Thanks in advance

Are you able to reproduce the problem with the unmodified KubeSpawner?

If not, is your code available in a public repository?

Can you turn on debug logging and share the logs?

Thanks for replying!

Yes, the unmodified kubespawner works.

I’m now leaning towards creating an external services definition for the AppService URL.
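For reference, that approach would look roughly like the sketch below in jupyterhub_config.py. The service name and the cluster-local URL are taken from the log message above; everything else (and the decision not to give it a command, so JupyterHub treats it as externally managed) is an assumption, not a known-good config for this deployment.

```python
# jupyterhub_config.py sketch (hypothetical): declare AppService as an
# external JupyterHub service. Because there is no "command" key,
# JupyterHub will not start or manage the process itself; it only
# registers the service so the hub/proxy know about it.
appservice = {
    "name": "appservice",
    "url": "http://appservice.default.svc.cluster.local",
    # "api_token": "...",  # only needed if the service calls the Hub API
}
# In the real config file this would be assigned as:
# c.JupyterHub.services = [appservice]
```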

So I still cannot access a service during the pre-spawn-hook phase.

I am trying to define the menu contents of the dropdown picklist, using a blend of the ryanlovett/imagespawner and jupyterhub-kubespawner classes.

I am looking to achieve a menu that can dynamically update with new menu item content (either by push, or by poll/pull).
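One way to get a menu that refreshes per spawn is that KubeSpawner’s profile_list can be set to a callable, which is re-evaluated each time the options form is built. A minimal sketch, with a stub in place of the real registry lookup (the display name and image below are invented for illustration):

```python
def build_profiles(spawner):
    # Evaluated by KubeSpawner when the options form is rendered, so
    # the menu can change between spawns without restarting the hub
    # (pull model). Replace this stub with a call to the notebook
    # registry; the entries here are purely illustrative.
    return [
        {
            "display_name": "Demo notebook",
            "kubespawner_override": {"image": "repo/demo:1.0"},
        },
    ]

# In jupyterhub_config.py this would be wired up as:
# c.KubeSpawner.profile_list = build_profiles
```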

I have created a metadata webservice which gathers notebook data (owner, sponsor, docker image version, notebook description etc). I have tried the following:

  1. call out (requests.get) to the webservice during the pre_spawn_hook phase, to redefine spawner.all_profiles from the webservice response
  2. call out (requests.get) to the webservice during spawner creation (spawner.__init__)
  3. call out (requests.get) to the webservice during spawner.start
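Attempt (1) can be sketched as below. The URL is the one from the log above, but the JSON field names and the profile-entry mapping are assumptions about the AppService contract, not its actual schema. Note also that requests.get is synchronous, so a slow response here blocks the hub process while it waits, which can itself surface as proxy 503s.

```python
APPSERVICE_URL = (
    "http://appservice.default.svc.cluster.local"
    "/api/services/appservice/getapps"
)

def apps_to_profiles(apps):
    """Map registry entries to spawner profile entries.

    The 'description' and 'image' keys are assumed field names,
    not the confirmed AppService response schema.
    """
    return [
        {
            "display_name": app["description"],
            "kubespawner_override": {"image": app["image"]},
        }
        for app in apps
    ]

def pre_spawn_hook(spawner):
    # Deferred import so the pure helper above is usable without
    # requests installed. A short timeout limits how long the hub's
    # event loop is blocked by this synchronous call.
    import requests
    resp = requests.get(APPSERVICE_URL, timeout=5)
    resp.raise_for_status()
    spawner.profile_list = apps_to_profiles(resp.json())

# c.KubeSpawner.pre_spawn_hook = pre_spawn_hook
```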

In all 3 cases I’ve tried a completely external URL; a cluster-local URL [the webservice is deployed in the same kube cluster]; and the declared ‘external’ service JupyterHub approach.

In the first 2 cases, immediately after the requests.get, an exception is thrown with a 503 server hangup.

In the last case there is no communication at all with the service, and no means of accessing it from outside jupyterhub (e.g. curl from a shell window).

I’m a little stuck here and looking for guidance. Can anyone advise?

[jupyterhub is deployed into kubernetes [helm chart]]

thanks in advance

  • Where is this webservice running? Is it a JupyterHub managed service, or is it an independent external service?
  • What happens if you kubectl exec -it <hub-pod> -- sh into the hub pod and try and make a request to this webservice from the command line, e.g. using curl?

So to my discredit, I hadn’t noticed that the configuration was inconsistent (it was inherited, and generated as a series of HEREDOCs using RUN echo docker instructions to create the jupyter server config file): (1) it was using an Authenticator; (2) it also included the configuration line ‘auto_login=True’.

Removing the ‘auto_login’ configuration removes the 503 errors.
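For anyone hitting the same thing: auto_login is a real Authenticator trait that tells JupyterHub to skip its own login page and redirect straight into the authenticator’s flow, which only makes sense for authenticators with an external login (e.g. OAuth/SSO). Roughly, the conflicting pair looked like this (reconstructed sketch, not the actual HEREDOC output; the authenticator class shown is illustrative):

```python
# jupyterhub_config.py sketch (illustrative, not the real config):
#
# c.JupyterHub.authenticator_class = SomeFormBasedAuthenticator
# c.Authenticator.auto_login = True   # <- removing this line cleared the 503s
#
# With a form-based authenticator, auto_login bypasses the login page
# the authenticator actually needs, producing the redirect loop that
# surfaced as repeated "503 GET /jupyter socket hang up" from the proxy.
```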