We’re running Z2JH 0.9, installed via helm on EKS, and are in the process of upgrading a customized Jupyter environment based on older images from the Jupyter Docker Stacks project. Our new containers do not open JupyterLab by default; our old containers do.
In testing, it appears the stock jupyter/datascience-notebook:latest container does not open JupyterLab either, i.e. it opens the legacy notebook environment.
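For anyone wanting to reproduce, this is roughly our local test (assuming Docker is available; the port mapping and flags are illustrative):

# Run the stock image outside JupyterHub and see which UI it serves.
docker run --rm -p 8888:8888 jupyter/datascience-notebook:latest

# In the docker-stacks images we've used, the start script switches to
# JupyterLab when asked via this environment variable:
docker run --rm -p 8888:8888 -e JUPYTER_ENABLE_LAB=yes \
    jupyter/datascience-notebook:latest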
Our helm chart config includes the following:
singleuser:
  defaultUrl: "/lab"
  extraEnv:
    JUPYTER_ENABLE_LAB: "yes"
We’ve been running Z2JH for a while (since the 0.8.2 days) and the cluster has been stable throughout. Are we missing a recent change in the config required to make JupyterLab the default?
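For reference, this is roughly how we’ve been sanity-checking that these settings reach a spawned pod (namespace and pod names are placeholders):

# Placeholder namespace and single-user pod name; substitute your own.
kubectl exec -n jhub jupyter-someuser -- env | grep JUPYTER_ENABLE_LAB
kubectl exec -n jhub jupyter-someuser -- env | grep JUPYTERHUB_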
The Z2JH config looks alright. You don’t need to set the environment variable, since singleuser.defaultUrl already points the spawner at /lab, but there’s no harm in having it:
https://zero-to-jupyterhub.readthedocs.io/en/latest/customizing/user-environment.html#use-jupyterlab-by-default
It might be helpful to see your full configuration, just in case some other change in 0.9.0 has an indirect effect.
Do you know the Docker image tag of the image that does work?
Hey Simon,
Do you know the Docker image tag of the image that does work?
Our current image uses jupyter/datascience-notebook:45f07a14b422 as a base. I’ve just tested this base image directly and it loads JupyterLab as expected.
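As a quick comparison between the working and non-working bases, docker inspect shows what each image runs by default (both tags need to be pulled locally first):

# Compare the baked-in entrypoint/cmd of the two tags from this thread.
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' \
    jupyter/datascience-notebook:45f07a14b422
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' \
    jupyter/datascience-notebook:latest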
Redacted config below, with our postStart lifecycle hook script below that.
hub:
  # db:
  #   upgrade: true
  extraEnv:
    OAUTH2_AUTHORIZE_URL: XXXXXXXXXXXXXX
    OAUTH2_TOKEN_URL: XXXXXXXXXXXXXX
    OAUTH_CALLBACK_URL: XXXXXXXXXXXXXX
auth:
  admin:
    access: false
    users:
      - admin
      - XXXXXXXXXXXXXX
  type: custom
  custom:
    className: oauthenticator.generic.GenericOAuthenticator
    config:
      login_service: "XXXXXXXXXXXXXX"
      client_id: "XXXXXXXXXXXXXX"
      client_secret: "XXXXXXXXXXXXXX"
      token_url: XXXXXXXXXXXXXX
      userdata_url: XXXXXXXXXXXXXX
      userdata_params: {'state': 'state'}
      username_key: "preferred_username"
      scope: ['openid', 'profile', 'email']
singleuser:
  nodeSelector:
    jhub: "true"
  memory:
    guarantee: 50G
    limit: 365G
  cpu:
    guarantee: 7
  storage:
    capacity: 250Gi
    dynamic:
      storageClass: gp2-jhub
      persistentVolumeReclaimPolicy: Retain
    extraVolumes:
      - name: shared-projects
        persistentVolumeClaim:
          claimName: pvc-jhub-shared-projects
      - name: dshm
        emptyDir:
          medium: Memory
    extraVolumeMounts:
      - name: shared-projects
        mountPath: /home/shared-projects
      - name: dshm
        mountPath: /dev/shm
  defaultUrl: "/lab"
  schedulerStrategy: pack
  extraEnv:
    JUPYTER_ENABLE_LAB: "yes"
    EDITOR: "vim"
    GRANT_SUDO: "yes"
  uid: 0
  cmd: null
  image:
    name: XXXXXXXXXXXXXX.dkr.ecr.us-west-2.amazonaws.com/jupyterlab-XXXXXXXXXXXXXX
    tag: "2XXXXXXXXXXXXXX"
  profileList:
    - display_name: "[CPU] Standard Env (XXXXXXXXXXXXXX)"
      description: "8+ cores, 50GB+ RAM"
      default: True
    - display_name: "[CPU] New Standard Env ALPHA (2020070703)"
      description: "8+ cores, 50GB+ RAM - JLab 2.1 Environment ALPHA"
      kubespawner_override:
        image: XXXXXXXXXXXXXX.dkr.ecr.us-west-2.amazonaws.com/jupyterlab-XXXXXXXXXXXXXX:2020070703
        memory:
          guarantee: 50G
          limit: 365G
        cpu:
          guarantee: 8
    - display_name: "[CPU] SparkMagic Only (XXXXXXXXXXXXXX)"
      description: "Lightweight Env for Remote Spark Use. 1 core, 4GB RAM"
      memory:
        guarantee: 4G
        limit: 5G
      cpu:
        guarantee: 1
    - display_name: "Test Env"
      description: "Test Env"
      kubespawner_override:
        image: jupyter/datascience-notebook:45f07a14b422
        memory:
          guarantee: 50G
          limit: 365G
        cpu:
          guarantee: 8
  lifecycleHooks:
    postStart:
      exec: { "command": ["/bin/sh", "-c", "/home/shared-projects/ds-lab-environment/jupyter-post-start-hook/finalize-jupyter-env.sh"] }
cull:
  timeout: 86400
  every: 3600
prePuller:
  continuous:
    enabled: false
  hook:
    enabled: false
proxy:
  secretToken: 'XXXXXXXXXXXXXX'
  https:
    enabled: true
    type: offload
  service:
    type: LoadBalancer
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
      service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-XXXXXXXXXXXXXX"
      service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "team=XXXXXXXXXXXXXX,cogs=0"
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "XXXXXXXXXXXXXX"
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
#!/bin/bash

# Link the shared-projects volume into the user's home directory.
if [ ! -e $HOME/shared-projects ]; then
  ln -s /home/shared-projects $HOME/shared-projects
  sudo chown jovyan:users $HOME/shared-projects
fi

# Write a default conda config pointing envs/pkgs at the user's home
# (written to $HOME explicitly so it matches the guard above).
if [ ! -e $HOME/.condarc ]; then
  cat > $HOME/.condarc << EOF
channels:
  - defaults
envs_dirs:
  - $HOME/my-conda-envs/
pkgs_dirs:
  - $HOME/.conda/pkgs
EOF
  sudo chown jovyan:users $HOME/.condarc
fi

# Seed the conda metadata directories and package cache location.
if [ ! -e $HOME/.conda ]; then
  mkdir $HOME/.conda
  mkdir -p $HOME/.conda/pkgs/cache
  touch $HOME/.conda/environments.txt
  touch $HOME/.conda/pkgs/urls.txt
  chown -R jovyan:users $HOME/.conda
  echo "export CONDA_PKGS_DIRS=$HOME/.conda/pkgs" >> $HOME/.bashrc
fi

if [ ! -e $HOME/my-conda-envs ]; then
  mkdir $HOME/my-conda-envs
  sudo chown jovyan:users $HOME/my-conda-envs
fi

# Copy a starter sparkmagic config from the shared volume.
if [ ! -e $HOME/.sparkmagic ]; then
  mkdir $HOME/.sparkmagic
  cp /home/shared-projects/ds-lab-environment/jupyter-post-start-hook/example_config.json $HOME/.sparkmagic/config.json
  sudo chown -R jovyan:users $HOME/.sparkmagic
fi

sudo chown jovyan:users $HOME
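For what it’s worth, here’s how we’ve been double-checking what the container actually runs, since cmd: null hands control to the image’s own entrypoint (namespace and pod names are placeholders):

# Placeholder names; substitute your namespace and a spawned user pod.
NS=jhub
POD=jupyter-someuser

# With singleuser.cmd: null the image's own ENTRYPOINT/CMD should run;
# check whether an explicit command was injected into the pod anyway:
kubectl get pod -n "$NS" "$POD" \
  -o jsonpath='{.spec.containers[0].command} {.spec.containers[0].args}'

# And see which server process is actually serving:
kubectl exec -n "$NS" "$POD" -- ps aux | grep -i jupyter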
So far I can’t see anything obviously wrong.
We installed a 0.9 beta prior to upgrading to 0.9, and were on 0.8.2 before the beta. I’ve just noticed that the configmaps and deployments are 400 days old, i.e. apparently unpatched through the various upgrade cycles. Should we expect them to have been updated?
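For anyone checking the same thing, this is roughly how we’ve been inspecting the deployed release (release and namespace names are placeholders, and the chart label key can differ between chart and Helm versions):

# Placeholder release/namespace names.
RELEASE=jhub
NS=jhub

# What helm believes has been deployed (add "-n $NS" on Helm 3):
helm history "$RELEASE"

# Which chart version the live objects are labelled with; note kubectl's
# AGE column reflects creation time, which in-place upgrades do not reset,
# so labels are a better signal than age:
kubectl get configmap,deploy -n "$NS" --show-labels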
There were minimal changes to the configmap template so it’s possible your configmap didn’t change:
$ git diff 0.8.2 0.9.0 -- ./jupyterhub/templates/hub/configmap.yaml
diff --git a/jupyterhub/templates/hub/configmap.yaml b/jupyterhub/templates/hub/configmap.yaml
index c6a3856..4471545 100644
--- a/jupyterhub/templates/hub/configmap.yaml
+++ b/jupyterhub/templates/hub/configmap.yaml
@@ -34,3 +34,7 @@ data:
   {{- $_ := set $values "Release" (pick .Release "Name" "Namespace" "Service") }}
   values.yaml: |
     {{- $values | toYaml | nindent 4 }}
+
+  {{- /* Glob files to allow them to be mounted by the hub pod */ -}}
+  {{- /* key=filename: value=content */ -}}
+  {{- (.Files.Glob "files/hub/*").AsConfig | nindent 2 }}
The configmap includes a number of Python files, which do look like they’ve changed a reasonable amount, even through the 0.9 beta period.
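For reference, the same kind of diff shows how much those files moved (run from a checkout of the chart repo, using the files/hub/ path the glob above refers to):

# Summarize how the hub's config files changed between releases:
git diff --stat 0.8.2 0.9.0 -- ./jupyterhub/files/hub/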
{{- (.Files.Glob "files/hub/*").AsConfig | nindent 2 }}
Should these have been updated through a helm upgrade?
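For context, here’s how we’re listing what the live configmap currently contains (namespace is a placeholder); if the 0.9.0 template had applied, the globbed files/hub/* entries should appear alongside values.yaml:

# Placeholder namespace. List the data keys in the live hub configmap.
kubectl get configmap hub -n jhub \
  -o go-template='{{range $k, $v := .data}}{{$k}}{{"\n"}}{{end}}'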