JupyterHub not loading Recent Jupyter Docker-Stacks

Hi all,

I was doing some maintenance on my JupyterHub running on Kubernetes on Google Cloud. I’ve been following the Zero to JupyterHub with Kubernetes guide and have successfully upgraded to the newest Helm chart (0.9.1), released on July 17th. Everything was running great until I tried to pull in an updated Docker image for the singleuser spawner. I had been using an older release of jupyter/datascience-notebook from Docker Hub and was looking to pull a newer image to get a few quality-of-life improvements. However, when pulling the latest tag, JupyterHub was unable to start the singleuser server, resulting in a [Warning] Back-off restarting failed container error. Switching back to my prior image restored the correct behavior, and I was able to get things running again.

Through some trial and error I was able to determine jupyter/datascience-notebook:lab-1.2.5 (about 5 months old) is the most recent container that appears to work. Any ideas on what’s changed since lab-1.2.5 that might be causing this behavior? Is anyone else experiencing this issue?
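In case it helps others reproduce the workaround, pinning to the known-good tag looks something like this in the Z2JH config.yaml (a sketch using the chart's standard singleuser.image option; adjust for your own values file):

```yaml
# Pin the singleuser image to the last known-good tag rather than
# tracking :latest, so upgrades are deliberate.
singleuser:
  image:
    name: jupyter/datascience-notebook
    tag: lab-1.2.5
```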

Helm Version: 2.16.10
Kubernetes Server Version: 1.15.12-gke.2

Hi! Please could you show us your logs from JupyterHub and the failed singleuser pods? It’d also be helpful if you could show us your Z2JH config with secrets redacted.

Hi @manics, thanks for your help. I’ve pulled some of the logs (something I’d never done before, so a great learning opportunity), and as a result I was able to identify the issue and solve the problem! In case someone else out there hits this, I’ll share my process and solution below:

When trying to pull the logs from the user pod, I get the following error, which actually led to the key discovery:

$ kubectl --namespace=jhub logs jupyter-gibson
Traceback (most recent call last):
  File "/opt/conda/bin/jupyter-labhub", line 6, in <module>
    from jupyterlab.labhubapp import main
ModuleNotFoundError: No module named 'jupyterlab.labhubapp'

I was ultimately able to get the logs from the Google Cloud Platform logging utility. The only error present for my user pod when attempting to start the image is:

 insertId: "egg3z7uzywzzm318z"
 labels: {
  k8s-pod/app: "jupyterhub"
  k8s-pod/chart: "jupyterhub-0.9.1"
  k8s-pod/component: "singleuser-server"
  k8s-pod/heritage: "jupyterhub"
  k8s-pod/hub_jupyter_org/network-access-hub: "true"
  k8s-pod/release: "jhub"
 }
 logName: "projects/<project-name>/logs/stderr"
 receiveTimestamp: "2020-08-18T13:39:41.680323404Z"
 resource: {
  labels: {
   cluster_name: "datahub-fall2020"
   container_name: "notebook"
   location: "us-east4-a"
   namespace_name: "jhub"
   pod_name: "jupyter-gibson"
   project_id: "<project-name>"
  }
  type: "k8s_container"
 }
 severity: "ERROR"
 textPayload: "Traceback (most recent call last):
   File "/opt/conda/bin/jupyter-labhub", line 6, in <module>
     from jupyterlab.labhubapp import main
 ModuleNotFoundError: No module named 'jupyterlab.labhubapp'"
 timestamp: "2020-08-18T13:39:02.298210883Z"
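As a side note, this kind of "No module named ..." failure can be checked without spawning a full server. A minimal sketch (the module_available helper is my own, not part of any Jupyter API) that tests importability the same way the traceback failed:

```python
import importlib.util

def module_available(name: str) -> bool:
    """Return True if `name` can be imported in this environment."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # Raised when a parent package (e.g. `jupyterlab`) is missing.
        return False

# Inside the failing image, "jupyterlab.labhubapp" would report False,
# matching the ModuleNotFoundError in the pod logs.
print(module_available("importlib"))  # stdlib, prints True
print(module_available("jupyterlab.labhubapp"))
```

Running a one-liner like this inside the image (e.g. via kubectl exec) is a quick way to confirm whether the spawn command's entry point still exists after an image upgrade.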

It certainly seemed like jupyter-labhub was the culprit. When I went to copy/paste my config.yaml, I found this suspect section, which I must have added long enough ago that I’ve forgotten why I included it in the first place:

  jupyterlab: |
    c.Spawner.cmd = ['jupyter-labhub']
  templates: |
    c.JupyterHub.template_paths = ['/etc/jupyterhub/custom/custom']

And there was the jupyter-labhub reference again. Commenting out these 5 lines fixed the issue, and I’m up and running again. It’s odd that it didn’t cause issues on earlier builds, but I’m guessing some change in JupyterLab made this override unnecessary, and in fact problematic to keep.
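For reference, newer images launch JupyterLab without the jupyter-labhub wrapper. If the intent of the removed snippet was to default users into Lab, a sketch of the usual alternative (assuming the chart's singleuser.defaultUrl option rather than overriding Spawner.cmd) would be:

```yaml
# Send users to the JupyterLab interface by default, without
# overriding the image's own start command.
singleuser:
  defaultUrl: "/lab"
```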

Thanks for the help and idea to dig into the logs. It was a great opportunity to learn a bit more about Kubernetes and Google Cloud Platform.
