Notebook/terminal websocket 400 (Z2JH 0.9-dev)

I’m testing out JupyterHub 1.0 and the new named servers, and I’m running into a lot of issues. It seems that terminals and notebooks won’t open at all.

The browser console lists preact.min.js.map, index.js.map, and compat.min.js.map as 404s, but those happen on a non-named server too, so they don’t seem related. The console then reports that the websocket returned a 400, and that only happens on a named server.

I haven’t been able to get much out of the container. I have c.Application.log_level set to 0, but all I get is the following:

[I 2019-05-20 18:36:23.798 SingleUserNotebookApp management:320] New terminal with automatic name: 1
[I 2019-05-20 18:36:23.802 SingleUserNotebookApp log:174] 200 POST /user/waskd6/testing/api/terminals (waskd6@128.206.116.250) 312.13ms                                                                     
[I 2019-05-20 18:36:23.811 SingleUserNotebookApp log:174] 200 GET /user/waskd6/testing/tree? (waskd6@128.206.116.250) 5.24ms                                                                                
[I 2019-05-20 18:36:23.841 SingleUserNotebookApp log:174] 200 GET /user/waskd6/testing/terminals/1 (waskd6@128.206.116.250) 19.89ms                                                                         
[I 2019-05-20 18:36:23.930 SingleUserNotebookApp log:174] 200 GET /user/waskd6/testing/api/config/common?_=1558395501439 (waskd6@128.206.116.250) 6.64ms                                                    
[I 2019-05-20 18:36:23.934 SingleUserNotebookApp log:174] 200 GET /user/waskd6/testing/api/config/terminal?_=1558395501438 (waskd6@128.206.116.250) 3.20ms                                                  
[W 2019-05-20 18:36:23.936 SingleUserNotebookApp log:174] 400 GET /user/waskd6/testing/terminals/websocket/1 (waskd6@128.206.116.250) 1.08ms  

which isn’t much to go on. It happens in both Chrome and Safari.
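Since the 400 shows up in the single-user server’s own log, it presumably isn’t browser-specific. Here’s a rough sketch of probing the handshake directly with the websocket-client package (the host and token are placeholders for your deployment; the user/server names are from the logs above):

import websocket  # pip install websocket-client

# Placeholders: your hub host and an API token for the user.
url = ("wss://hub.example.edu/user/waskd6/testing/"
       "terminals/websocket/1?token=YOUR_TOKEN")

# create_connection raises WebSocketBadStatusException on a non-101
# response, so a 400 from the handshake shows up here as well.
ws = websocket.create_connection(url)
print(ws.recv())
ws.close()

Ideas?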

Hi,

I don’t actually know the answer or where to start with this. Could you post the config file you use, whether you are using JupyterLab or the classic notebook, etc.? Basically, the steps someone would need to reproduce this locally. That will make it easier for others to help out.

Good point, here’s our ocean of config:

config.yml
hub:
  image:
    name: 'XXX/jupyter/zero-to-jupyterhub-k8s/dev/hub'
    tag: 'latest'
  imagePullPolicy: Always
  imagePullSecret:
    enabled: True
    password: XXXX
    username: XXXX
    registry: https://XXX
  nodeSelector:
    workset: cpu
  extraConfig:
    1: |
       c.KubeSpawner.profile_list.append({'display_name': 'Debug Image', 'kubespawner_override': {'image': 'XXX/jupyter/docker-stacks/dev/singleuser-dev:latest-shim'}})
    2: |
       c.JupyterHub.allow_named_servers = True
  spawner:
    git_server: 'XXXXXXXX.lan'

singleuser:
  cmd: null
  extraEnv:
    DEVDEBUG: "yespls"
  image:
    name: 'XXX/jupyter/docker-stacks/dev/singleuser-minimal'
    tag: 'latest-shim'
  imagePullPolicy: Always
  imagePullSecret:
    enabled: True
    password: XXXX
    username: XXXX
    registry: https://XXX
  networkTools:
    image:
      tag: 0.7.0
  storage:
    type: none
    capacity: 30Gi
    extraVolumes:
      - name: 'home-{username}'
        hostPath:
          path: '/dsa-dev/home/{username}'
      - name: 'data'
        hostPath:
          path: '/dsa/data'
      - name: 'scripts'
        hostPath:
          path: '/dsa-dev/scripts'
    extraVolumeMounts:
      - name: 'home-{username}'
        mountPath: '/dsa/home/{username}'
      - name: 'data'
        mountPath: '/dsa/data'
        readOnly: True
      - name: 'scripts'
        mountPath: '/dsa/scripts'
        readOnly: True

  profileList:
   -  display_name: 'Data Science - Core (7600, 8610, 8620, 8630, 8640, 8650)'
      default: True
      kubespawner_override:
        image: 'XXX/jupyter/course-containers/dev/singleuser-core:latest-shim'
        cpu_guarantee: 0.5
        cpu_limit: 2
        mem_limit: '4G'
        node_selector:
          workset: cpu
   -  display_name: 'Data Science - R/Dataviz'
      default: False
      kubespawner_override:
        image: 'XXX/jupyter/course-containers/dev/singleuser-dataviz:latest-shim'
        cpu_guarantee: 0.5
        cpu_limit: 2
        mem_limit: '4G'
        node_selector:
          workset: cpu
   -  display_name: 'Data Science - AWS'
      default: False
      kubespawner_override:
        image: 'XXX/jupyter/course-containers/dev/singleuser-aws:latest-shim'
        cpu_guarantee: 0.5
        cpu_limit: 2
        mem_limit: '4G'
        node_selector:
          workset: cpu
   -  display_name: 'Data Science - GCP'
      default: False
      kubespawner_override:
        image: 'XXX/jupyter/course-containers/dev/singleuser-gcp:latest-shim'
        cpu_guarantee: 0.5
        cpu_limit: 2
        mem_limit: '4G'
        node_selector:
          workset: cpu
   -  display_name: 'Data Science - DMIR'
      default: False
      kubespawner_override:
        image: 'XXX/jupyter/course-containers/dev/singleuser-dmir:latest-shim'
        cpu_guarantee: 0.5
        cpu_limit: 2
        mem_limit: '4G'
        node_selector:
          workset: cpu
   -  display_name: 'Minimal'
      default: False
      kubespawner_override:
        image: 'XXX/jupyter/docker-stacks/dev/singleuser-minimal:latest-shim'
        cpu_guarantee: 0.5
        cpu_limit: 2
        mem_limit: '4G'
        node_selector:
          workset: cpu
   -  display_name: 'Datascience'
      default: False
      kubespawner_override:
        image: 'XXX/jupyter/docker-stacks/dev/singleuser-datascience:latest-shim'
        cpu_guarantee: 0.5
        cpu_limit: 2
        mem_limit: '4G'
        node_selector:
          workset: cpu
   -  display_name: 'Allspark'
      default: False
      kubespawner_override:
        image: 'XXX/jupyter/docker-stacks/dev/singleuser-allspark:latest-shim'
        cpu_guarantee: 0.5
        cpu_limit: 2
        mem_limit: '4G'
        node_selector:
          workset: cpu
#   -  display_name: 'Tensorflow-CPU'
#      default: False
#      kubespawner_override:
#        image: 'XXX/jupyter/docker-stacks/dev/singleuser-tf:latest-shim'
#        cpu_guarantee: 0.5
#        cpu_limit: 2
#        mem_limit: '4G'
#        node_selector:
#          workset: cpu
#   -  display_name: 'Tensorflow-GPU'
#      default: False
#      kubespawner_override:
#        image: 'XXX/jupyter/docker-stacks/dev/singleuser-tf-gpu:latest-shim'
#        cpu_guarantee: 0.5
#        cpu_limit: 4
#        mem_limit: '16G'
#        extra_resource_guarantees:
#          nvidia.com/gpu: 1
#        extra_resource_limits:
#          nvidia.com/gpu: 1
#        node_selector:
#          workset: gpu

prePuller:
  hook:
    enabled: false
    image:
      tag: 0.7.0

debug:
  enabled: true

auth:
  type: 'ldap'
  ...


proxy:
  secretToken: 'XXXXX'
  service:
    loadBalancerIP: 172.17.5.67
  nodeSelector:
    workset: cpu
  https:
    enabled: true
    type: manual
    hosts:
      - xxx.missouri.edu
    manual:
      key: |
        -----BEGIN PRIVATE KEY-----
        ...
jupyterhub_config.py

from labldapauthenticator import LDAPAuthenticator
c.Authenticator.post_auth_hook = LDAPAuthenticator.build_profile



from tornado import gen

# get_config is the z2jh helper for reading values from the chart config
# (the hub.spawner.git_server entry in config.yml above)
spawner_git_server = get_config('hub.spawner.git_server')

@gen.coroutine
def spawner_config(spawner):
  """
  We are running ON THE HUB! Need to configure the mounts and stuff for the end user.

  The startup scripts will have to do the rest.

  1. Setup mounts and owners and paths and IDs
  2. Pass data off to the container for build (spawner.environment dict)
  """
  auth_state = yield spawner.user.get_auth_state()

  # Entrypoint of root seems to get smashed by this. Duh. Whoops.
  spawner.uid = 0
  spawner.gid = 0

  #spawner.uid = auth_state['profile']['uid']
  #spawner.gid = auth_state['profile']['gid']
  spawner.fs_gid = auth_state['profile']['gid'] # I still don't get this one
  spawner.supplemental_gids = auth_state['profile']['group_membership']
  spawner.environment['NB_USER'] = spawner.user.name
  spawner.environment['NB_UID'] = str(auth_state['profile']['uid'])
  spawner.environment['NB_GID'] = str(auth_state['profile']['gid'])

  if spawner.user.admin:
      spawner.environment['GRANT_SUDO'] = '1'

  spawner.environment['GIT_SSH_HOST'] = spawner_git_server

  spawner.environment['NOTEBOOK_DIR'] = '/home/{username}/jupyter'.format(username=spawner.user.name)                                                                              

  spawner.environment['GROUP_BUILD'] = ' '.join([':'.join([v, str(k)]) for k, v in auth_state['profile']['group_map'].items()])                                                    
  spawner.environment['GROUP_MEMBER'] = ' '.join([str(x) for x in auth_state['profile']['group_membership']])                                                                      

  return

c.Spawner.pre_spawn_hook = spawner_config

I found the issue yesterday! We had some ancient nginx config for proxying our connections that used a regex to fiddle with websocket support, and the extra path component that named servers add wasn’t making it through. All is well now :slight_smile:
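For anyone who hits the same thing: named servers change the URL from /user/&lt;name&gt;/... to /user/&lt;name&gt;/&lt;servername&gt;/..., so a regex that only matches the un-named layout can silently drop the websocket upgrade headers. A minimal sketch of the kind of non-regex block that avoids this (the upstream is a placeholder, not our actual config):

# Sketch only: forward everything under /user/ — including the extra
# /<servername>/ segment — with websocket upgrade headers, rather than
# regex-matching specific sub-paths.
location /user/ {
    proxy_pass http://127.0.0.1:8000;        # wherever proxy-public is exposed
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;

    # websocket support
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}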


I had a similar issue and managed to solve it using this SO post: Kubernetes ingress websockets connection issue - Stack Overflow

In my case it’s because I have a bare-metal cluster with no load balancer, so my service uses ClusterIP instead.
The ingress annotations must be modified to allow websocket connections, as described in the post.
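For reference, the change looks roughly like this for ingress-nginx (a sketch: the host is a placeholder, the timeout values are just the commonly suggested ones, and the apiVersion matches 2019-era clusters; proxy-public is z2jh’s proxy service):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: jupyterhub
  annotations:
    kubernetes.io/ingress.class: nginx
    # keep long-lived websocket connections from being timed out by the proxy
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  rules:
    - host: hub.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: proxy-public
              servicePort: 80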