Spawner default url is not working anymore

Hello everyone, I’ve upgraded JupyterHub from v1.5.0 to v3.0.0. The spawner default url is not working after the upgrade.

          # displays a notebook with information when launching
          c.Spawner.default_url = '/user/{username}/lab/tree/Information.ipynb'

This is what I’ve used before. Please let me know if there is anything that needs to be changed.

I am not sure how it worked in JupyterHub 1.*, but in v3.0 and above it should be

c.Spawner.default_url = '/lab/tree/Information.ipynb'

assuming that you are launching JupyterLab in a directory that contains the notebook Information.ipynb at its root.
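Side by side, the change looks like this (a sketch of a jupyterhub_config.py fragment; the notebook path comes from the post above):

```python
# jupyterhub_config.py (sketch)

# JupyterHub 1.x style, with the /user/{username} prefix spelled out:
# c.Spawner.default_url = '/user/{username}/lab/tree/Information.ipynb'

# JupyterHub 3.x: default_url is interpreted relative to the user's own
# server, so the /user/{username} prefix must be dropped:
c.Spawner.default_url = '/lab/tree/Information.ipynb'
```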


Thank you for your response @mahendrapaipuri. The issue is still the same.

Do we need to set the notebook directory?

After loading I can see this:

https://sandbox.infra.au/user/sureshuppada12@gmail.com/tree?

After the upgrade, it loads the Notebook interface by default. I used to see Lab before the change. Any idea?

To start, try setting c.Spawner.default_url = '/lab'; normally this should give you the JupyterLab interface.

If you want the JupyterLab file browser to open in /home/foo/, try passing c.Spawner.args = ['--ContentsManager.root_dir=/home/foo'] so that the file browser opens in that directory. If you have a notebook at /home/foo/Information.ipynb, setting c.Spawner.default_url = '/lab/tree/Information.ipynb' should open that notebook when you get redirected from the hub to JupyterLab.
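Putting the two options together, a minimal jupyterhub_config.py sketch (with /home/foo and Information.ipynb as placeholder paths from the example above):

```python
# jupyterhub_config.py (sketch) -- /home/foo is a placeholder path

# Make the JupyterLab file browser open in /home/foo:
c.Spawner.args = ['--ContentsManager.root_dir=/home/foo']

# Open /home/foo/Information.ipynb after the redirect from the hub:
c.Spawner.default_url = '/lab/tree/Information.ipynb'
```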

Hello @mahendrapaipuri ,

This is my config:

          # Set the log level by value or name
          c.JupyterHub.log_level = 'DEBUG'

          # Set cookies - jupyterhub-session-id and jupyterhub-hub-login - to less than a day
          c.JupyterHub.cookie_max_age_days = 0.90
          c.JupyterHub.tornado_settings['cookie_options'] = dict(expires_days=0.90)

          # Enable debug-logging of the single-user server
          c.Spawner.debug = True

          # Disable debug-logging of the LocalProcessSpawner
          c.LocalProcessSpawner.debug = False
          c.Spawner.cmd = ['jupyterhub-singleuser']

          # displays a notebook with information during the launch
          c.Spawner.default_url = '/lab'

          # Override spawner timeout - in seconds
          c.KubeSpawner.start_timeout = 600
          c.KubeSpawner.http_timeout = 60

          # Override options_form
          c.KubeSpawner.options_form = custom_options_form

        templates: |
          c.JupyterHub.logo_file = u'/etc/jupyterhub/custom/branding/logo-inline.svg'

I’ve set c.Spawner.default_url = '/lab' in my config. Still, that didn’t work.

Where exactly are you getting redirected to after spawning?

I see that you have both LocalProcessSpawner and KubeSpawner in your config. So, I assume you are using some sort of profiles to spawn single user servers. Are you sure that these profiles are not overriding spawner.default_url?
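For example (a hypothetical profile entry for illustration, not taken from your config), a kubespawner_override like this would silently replace whatever c.Spawner.default_url is set to:

```python
# Hypothetical profile_list entry for illustration only:
c.KubeSpawner.profile_list = [
    {
        'display_name': 'Classic notebook',
        'default': True,
        'kubespawner_override': {
            # this wins over c.Spawner.default_url for servers
            # spawned with this profile
            'default_url': '/tree',
        },
    },
]
```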

After the upgrade, spawning redirects to the Notebook interface.

Before the upgrade, Lab used to spawn without setting any default url.

I even tried c.KubeSpawner.default_url = '/lab'. No use.

Extra config:

      extraConfig:
        spawner: |
          #!/usr/bin/env python3

          import json
          import os
          import sys
          import base64
          import time
          import requests
          from jupyterhub.handlers import LogoutHandler
          from tornado import web
          # install the 'cognitojwt' package into the hub container - required to validate user claims
          try:
            import cognitojwt
          except ImportError:
            import subprocess
            subprocess.call([sys.executable, "-m", "pip", "install", "wheel"])
            subprocess.call([sys.executable, "-m", "pip", "install", "kubernetes"])
            subprocess.call([sys.executable, "-m", "pip", "install", "--user", "cognitojwt[sync]"])
          finally:
            sys.path.append(os.path.expanduser('~') + "/.local/lib/python3.9/site-packages")
            import cognitojwt

          def enum(**enums):
            return type('Enum', (), enums)

          from kubernetes import client, config

          async def verify_claims(self, user):
            # Retrieve user authentication info, decode, and verify claims
            try:
              auth_state = await user.get_auth_state()
              # self.log.info(f"auth_state: {auth_state}")
              if auth_state is None:
                raise ValueError("auth_state is empty")

              verified_claims = cognitojwt.decode(
                auth_state['access_token'],
                os.getenv('COGNITO_REGION', 'us-west-2'),
                os.getenv('JUPYTERHUB_USERPOOL_ID'),
                testmode=False  # Enable token expiration check
              )
              return verified_claims
            except cognitojwt.CognitoJWTException as err:
              self.log.error(f"Claim verification issue: {err}")
              raise web.HTTPError(401, "Session is expired!")

          async def custom_options_form(self):
            self.log.info(f"logged in user: {self.user.name}")

Hope this helps.

I am sorry, but I can't see anything obvious in the config that you shared. The only thing I can think of is that your custom spawner is changing the default_url.

I have never worked with JupyterHub 1.*, so I don't know how it worked back then. Maybe the core developers can shed some light!

I’ve tried removing the default url, but the behaviour is the same. Thank you @mahendrapaipuri for your support and time. I’ll post it in the community.

Can you show us your full configuration? Are you able to reproduce your problem with a minimal configuration, without any customisations other than setting default_url?

Hello @manics,

This is my configuration:

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: jupyterhub
  namespace: sandbox
spec:
  values:
    hub:
      extraConfig:
        spawner: |
          #!/usr/bin/env python3

          import json
          import os
          import sys
          import base64
          import time
          import requests
          from jupyterhub.handlers import LogoutHandler
          from tornado import web
          # install the 'cognitojwt' package into the hub container - required to validate user claims
          try:
            import cognitojwt
          except ImportError:
            import subprocess
            subprocess.call([sys.executable, "-m", "pip", "install", "wheel"])
            subprocess.call([sys.executable, "-m", "pip", "install", "kubernetes"])
            subprocess.call([sys.executable, "-m", "pip", "install", "--user", "cognitojwt[sync]"])
          finally:
            sys.path.append(os.path.expanduser('~') + "/.local/lib/python3.9/site-packages")
            import cognitojwt

          def enum(**enums):
            return type('Enum', (), enums)

          from kubernetes import client, config

          async def verify_claims(self, user):
            # Retrieve user authentication info, decode, and verify claims
            try:
              auth_state = await user.get_auth_state()
              # self.log.info(f"auth_state: {auth_state}")
              if auth_state is None:
                raise ValueError("auth_state is empty")

              verified_claims = cognitojwt.decode(
                auth_state['access_token'],
                os.getenv('COGNITO_REGION', 'us-west-2'),
                os.getenv('JUPYTERHUB_USERPOOL_ID'),
                testmode=False  # Enable token expiration check
              )
              return verified_claims
            except cognitojwt.CognitoJWTException as err:
              self.log.error(f"Claim verification issue: {err}")
              raise web.HTTPError(401, "Session is expired!")

          async def custom_options_form(self):
            self.log.info(f"logged in user: {self.user.name}")

            cognito_user_groups = enum(
              DEVELOPMENT='dev-group',
              DEFAULT='default-group',
              NONE='None'
            )

            # setup default profile_list for all users
            default_profile_list = [
              {
                'default': True,
                'display_name': 'Default environment',
                'description': '2 Cores, 16 GB Memory',
                'kubespawner_override': {
                  'mem_guarantee': '14G',
                  'mem_limit': '14G',
                  'cpu_guarantee': 1.5,
                  'cpu_limit': 1.5,
                  'node_selector': {'nodesize': 'L'},
                }
              },
            ]
            self.profile_list = default_profile_list

            dev_profile_list = [
              {
                'default': False,
                'display_name': 'Large environment - test',
                'description': '4 Cores, 32 GB Memory',
                'kubespawner_override': {
                  'mem_guarantee': '29G',
                  'mem_limit': '29G',
                  'cpu_guarantee': 3.5,
                  'cpu_limit': 3.5,
                  'node_selector': {'nodesize': 'XL'},
                }
              },
            ]

            try:
              # Read user access token to collect user group info
              verified_claims = await verify_claims(self, self.user)
              user_group_info = verified_claims.get('cognito:groups', [])
              self.log.info(f"{self.user.name} user belongs to group(s): {(','.join(user_group_info))}")

              # Use logic here to decide how to configure user profile_list based on user-group
              if cognito_user_groups.DEVELOPMENT in user_group_info:
                self.profile_list.extend(dev_profile_list)

              # Set extra labels
              # Add extra labels - labels are used for cilium network policy and cost
              extra_labels = {
                'username': '{username}',
                'hub.jupyter.org/network-access-hub': 'true'
              }
              self.singleuser_extra_labels = extra_labels

              # Return options_form - Let KubeSpawner inspect profile_list and decide what to return
              return self._options_form_default()
            except (TypeError, IndexError, ValueError, KeyError) as err:
              self.log.error(f"Error loading profiles: {err}")
              raise web.HTTPError(400, "Something went wrong. Could not load profiles")

          # Set the log level by value or name
          c.JupyterHub.log_level = 'DEBUG'

          # Set cookies - jupyterhub-session-id and jupyterhub-hub-login - to less than a day
          c.JupyterHub.cookie_max_age_days = 0.90
          c.JupyterHub.tornado_settings['cookie_options'] = dict(expires_days=0.90)

          # Enable debug-logging of the single-user server
          c.Spawner.debug = True

          # Disable debug-logging of the LocalProcessSpawner
          c.LocalProcessSpawner.debug = False
          c.Spawner.cmd = ['jupyterhub-singleuser']

          # displays a notebook with information during the launch
          c.Spawner.default_url = '/lab'

          # Override spawner timeout - in seconds
          c.KubeSpawner.start_timeout = 600
          c.KubeSpawner.http_timeout = 60

          # Override options_form
          c.KubeSpawner.options_form = custom_options_form

        templates: |
          c.JupyterHub.logo_file = u'/etc/jupyterhub/custom/branding/logo-inline.svg'

Is there anything wrong with this config?

I don't have experience with KubeSpawner, but I don't see anything obviously wrong in that config. When you start your single-user server, what is the environment variable JUPYTERHUB_DEFAULT_URL set to?
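One way to check is this small sketch, run inside the running single-user server (for instance from a Lab terminal or a notebook cell); the hub passes Spawner.default_url to the server through this variable:

```python
# Print the default URL the hub handed to this single-user server.
import os

default_url = os.environ.get("JUPYTERHUB_DEFAULT_URL")
print(default_url)  # None means the hub did not set one
```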

I didn’t set any environment variable.

Have you upgraded your singleuser image too? Do you see the same problem using a recent docker-stacks image?

This is my JupyterHub deployment config:

Here I’m trying to upgrade the JupyterHub helm chart from 1.2.0 to 2.0.0; k8s-hub, k8s-image-awaiter, and k8s-network-tools from 1.2.0 to 2.0.0; and jupyterhub/configurable-http-proxy from 4.5.0 to 4.5.3.

## Helm Charts: https://github.com/jupyterhub/helm-chart
# Source Repository: https://github.com/jupyterhub/jupyterhub
# https://github.com/jupyterhub/zero-to-jupyterhub-k8s/blob/master/jupyterhub/values.yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: jupyterhub
  namespace: sandbox
spec:
  interval: 5m
  releaseName: jupyterhub
  chart:
    spec:
      chart: jupyterhub
      version: 2.0.0
      sourceRef:
        kind: HelmRepository
        name: jupyterhub-repository
  valuesFrom:
    - kind: Secret
      name: jupyterhub
      valuesKey: values.yaml
      optional: false
  values:
    # hub relates to the hub pod, responsible for running JupyterHub, its configured Authenticator class,
    # its configured Spawner class KubeSpawner, and its configured Proxy class ConfigurableHTTPProxy
    hub:
      #cookieSecret - Injected by Flux
      image:
        name: k8s-hub
        tag: 2.0.0
      resources:
        requests:
          cpu: 500m # 0m - 1000m
          memory: 2Gi # 200Mi - 4Gi
      pdb:
        enabled: false
        minAvailable: 1
      # Injected by Flux - Authentication and extraEnv
      networkPolicy:
        enabled: false
      authenticatePrometheus: false # disable authentication for Prometheus endpoint
      initContainers:
        - name: git-clone-templates
          image: alpine/git:latest
          args:
            - clone
            - --single-branch
            - --branch=main
            - --depth=1
            - --
            - https://github.com/sandbox-templates.git
            - /etc/jupyterhub/custom
          securityContext:
            runAsUser: 0
          volumeMounts:
            - name: custom-templates
              mountPath: /etc/jupyterhub/custom
      extraVolumes:
        - name: custom-templates
          emptyDir: {}
      extraVolumeMounts:
        - name: custom-templates
          mountPath: /etc/jupyterhub/custom
      templatePaths: ['/etc/jupyterhub/custom/templates']
      # can be overridden - env specific
      templateVars: {}
      config:
        KubeSpawner:
          delete_pvc: false
      # can be overridden - env specific
      extraConfig: {}
    # proxy relates to the proxy pod, the proxy-public service, and the autohttps pod and proxy-http service.
    proxy:
      #secretToken - Injected by Flux
      chp:
        image:
          name: jupyterhub/configurable-http-proxy
          tag: 4.5.3
        resources:
          requests:
            cpu: 500m # 0m - 1000m
            memory: 256Mi # 100Mi - 600Mi
        networkPolicy:
          enabled: false
        pdb:
          enabled: false
          minAvailable: 1
      traefik:
        image:
          name: traefik
          tag: v2.4.11
        resources:
          requests:
            cpu: 500m # 0m - 1000m
            memory: 512Mi # 100Mi - 1.1Gi
        networkPolicy:
          enabled: false
        pdb:
          enabled: false
          minAvailable: 1
      service:
        type: ClusterIP
      https:
        enabled: true
        type: offload

    #ingress - Injected by Flux

    scheduling:
      userScheduler:
        enabled: true
        resources:
          requests:
            cpu: 30m # 8m - 45m
            memory: 512Mi # 100Mi - 1.5Gi
      podPriority:
        enabled: true
      userPlaceholder:
        enabled: false
      corePods:
        nodeAffinity:
          matchNodePurpose: require
      userPods:
        nodeAffinity:
          matchNodePurpose: require

    # prePuller relates to the hook|continuous-image-puller DaemonSets
    prePuller:
      continuous:
        enabled: false
      # hook relates to the hook-image-awaiter Job and hook-image-puller DaemonSet
      hook:
        enabled: false
        pullOnlyOnChanges: true
        image:
          name: jupyterhub/k8s-image-awaiter
          tag: 2.0.0

    # cull relates to the jupyterhub-idle-culler service, responsible for evicting inactive singleuser pods.
    # for jupyterhub-idle-culler as documented here:
    # https://github.com/jupyterhub/jupyterhub-idle-culler#as-a-standalone-script
    cull:
      enabled: true
      users: true               # --cull-users
      removeNamedServers: false # --remove-named-servers
      timeout: 10800            # --timeout - 3 hours
      every: 600                # --cull-every - 10 mins
      maxAge: 0                 # --max-age

    # singleuser relates to the configuration of KubeSpawner which runs in the hub pod,
    # and its spawning of user pods such as jupyter-myusername.
    singleuser:
      networkTools:
        image:
          name: jupyterhub/k8s-network-tools
          tag: 2.0.0
      networkPolicy:
        enabled: false
      nodeSelector:
        nodesize: 'L'
      defaultUrl: "/lab"
      memory:
        limit: 15G
        guarantee: 14G
      cpu:
        limit: 1.7
        guarantee: 1.5
      cloudMetadata:
        # block set to true will append a privileged initContainer using the
        # iptables to block the sensitive metadata server at the provided ip.
        blockWithIptables: true
        ip: 169.254.169.254
      image:
        name: sandbox
        tag: 0.0.5
      startTimeout: 600
#      Injected by Flux - using secrets
#      extraEnv:
#        DB_HOSTNAME: ${db_hostname}
#        DB_USERNAME: ${db_username}
#        DB_PASSWORD: ${db_password}
#        DB_DATABASE: ${db_name}
#        AWS_DEFAULT_REGION: ${region}
#        AWS_NO_SIGN_REQUEST: "YES"
      # can be overridden - env specific
      storage:
        homeMountPath: /home/jovyan
        dynamic:
          storageClass: encrypted-gp2
          pvcNameTemplate: claim-{username}
          volumeNameTemplate: volume-{username}
        extraVolumes:
          - name: notebooks
            emptyDir: {}
          - name: jupyter-notebook-config
            configMap:
              name: jupyter-notebook-config
        extraVolumeMounts:
          - name: notebooks
            mountPath: /notebooks
          - name: jupyter-notebook-config
            mountPath: /etc/jupyter/jupyter_notebook_config.py
            subPath: jupyter_notebook_config.py

Any suggestions would be greatly appreciated.

Did you see my previous question?

Sorry, @manics, I missed that. I guess we’re not updating any singleuser image. The things I’m trying to upgrade are the JupyterHub helm chart from 1.2.0 to 2.0.0; k8s-hub, k8s-image-awaiter, and k8s-network-tools from 1.2.0 to 2.0.0; and configurable-http-proxy from 4.5.0 to 4.5.3.