After updating JupyterHub from 0.8.1 to 1.3.0, unable to spawn Docker images with network name "host" in jupyterhub_config.py #3533

Bug description

We were running JupyterHub 0.8.1 with the network type set to "host" in jupyterhub_config.py, and Docker images spawned properly. After updating JupyterHub to 1.3.0 with the same network type "host" in jupyterhub_config.py, we are no longer able to spawn Docker images.

Expected behaviour

Docker images should spawn without issues with network type "host" on JupyterHub 1.3.0.
Let us know if any configuration needs to be corrected to make Docker images spawn properly in JupyterHub.

Actual behaviour

Docker images fail to spawn with network type "host" on JupyterHub 1.3.0.
**Exception raised when network type is "host":** The 'ip' trait of a Server instance expected a unicode string, not the NoneType None.

How to reproduce

1. Install JupyterHub 1.3.0 with network type "host" and a Docker image with a notebook kernel.
2. Start JupyterHub and try to spawn the image.
3. The following error is returned:

Error: HTTP 500: Internal Server Error (Error in Authenticator.pre_spawn_start: TraitError The 'ip' trait of a Server instance expected a unicode string, not the NoneType None.)


When I change the network type to "bridge" in jupyterhub_config.py I can spawn images successfully, but we need network type "host" because the Docker images depend on the host network for creating a Spark session.
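For context on why the 'ip' trait ends up as None: with `network_name = 'host'` the container shares the host's network stack, so there is no per-container IP address for DockerSpawner to look up. One possible workaround (an unverified sketch; `HostNetworkSpawner` is a hypothetical name, and it assumes the server port is assigned explicitly, as the pre-spawn hook in the config below already does) is to subclass the spawner and report the loopback address and assigned port directly:

```python
from dockerspawner import DockerSpawner

class HostNetworkSpawner(DockerSpawner):
    """Sketch: with host networking the single-user server is reachable
    on the host itself, so return localhost plus the assigned port
    instead of inspecting the container's (empty) network settings."""

    async def get_ip_and_port(self):
        # self.port is set explicitly (e.g. in a pre_spawn_hook);
        # '127.0.0.1' assumes the hub runs on the same machine
        return '127.0.0.1', self.port
```

If spawning succeeds with such an override, that would confirm the failure comes from the container-IP lookup rather than from the authenticator.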


Your personal set up

Attaching the jupyterhub_config.py file:

  • OS:
  • Version(s):
  • Full environment
# paste output of `pip freeze` or `conda list` here
  • Configuration
# jupyterhub_config.py 
import os
import socket

c.PAMAuthenticator.open_sessions = False
# Configuration file for jupyterhub.
c.JupyterHub.pid_file = '/folder1/miniconda3/envs/jupyterhub/etc/jupyterhub.pid'
c.JupyterHub.logo_file = '/folder1/miniconda3/envs/jupyterhub/share/jupyter/hub/static/images/sampleimage_logo.png'
os.environ['OTDS_URL'] = 'http://inhyd-mag211-otds-launchpad.otxlab.net:8080'
c.JupyterHub.base_url = '/'
c.LocalOTDSOAuthenticator.login_service = 'folder1 Directory Service'
c.JupyterHub.cookie_secret_file = '/folder1/miniconda3/envs/jupyterhub/etc/jupyterhub_cookie_secret'
c.JupyterHub.db_url = '/folder1/miniconda3/envs/jupyterhub/etc/jupyterhub.sqlite'
# In order to activate OTDS integration, uncomment the below line
c.JupyterHub.authenticator_class = 'oauthenticator.LocalOTDSOAuthenticator'
# To enable kerberos authentication in jupyterhub uncomment below line
c.LocalOTDSOAuthenticator.client_id = 'notebook_175' 
c.LocalOTDSOAuthenticator.client_secret = 'vomSAHty68e98Y6FO2iOqVVz33t8odY7'
c.LocalOTDSOAuthenticator.username_key = 'name'
c.LocalOTDSOAuthenticator.callback_logout_url = 'http://10.96.94.175:8000/hub/login'
c.LocalOTDSOAuthenticator.oauth_callback_url = 'http://10.96.94.175:8000/hub/oauth_callback'
c.LocalOTDSOAuthenticator.resource_id = "231ecd3a-40b6-4d8d-a5f0-c8d85b5f2993"
c.LocalOTDSOAuthenticator.resource_name = "m4_demo_notebook"
c.Authenticator.admin_users = {"sampleimage"}
c.JupyterHub.proxy_api_ip = '0.0.0.0'
c.JupyterHub.hub_port = 8090
notebook_dir = '/home/jupyter/work'
c.DockerSpawner.notebook_dir = notebook_dir

c.JupyterHub.log_level = 'DEBUG'

# Enable debug-logging of the single-user server
c.Spawner.debug = True

# Disable debug-logging of the LocalProcessSpawner
c.LocalProcessSpawner.debug = False

# pass the maprticket file name here
MAPR_TICKET_FILE_PATH = ''
maprticket = "NO FILE FOUND"
if os.path.isfile(MAPR_TICKET_FILE_PATH) and os.access(MAPR_TICKET_FILE_PATH, os.R_OK):
    with open(MAPR_TICKET_FILE_PATH, "r") as f:
        maprticket = f.read()
else:
    print("Either the file is missing or not readable")

# in prod environment bda and spark master url are set in Ambari service
c.JupyterHub.spawner_class = 'dockerspawner.DockerSpawner'
c.DockerSpawner.environment = {'ELECTRON_URL':'http://10.96.209.56:8110/',
                              'PUBLISH_MODEL':'https://10.96.94.175:8081/publish/modelPublish',
                              'CLIENT_ID': 'notebook_client_175',
                              'CLIENT_SECRET':'vomSAHty68e98Y6FO2iOqVVz33t8odY7',
                              'SPARK_HOST':'10.96.94.156',
                              'RESOURCE_MANAGER_ADDRESS':'10.96.94.156:8032',
                              'HISTORY_LOG_DIRECTORY':'hdfs://clouderamaster631.lab.folder1.com:8020/user/spark/applicationHistory',
                              'HISTORY_SERVER_ADDRESS':'10.96.94.156:18088',
                              'PYSPARK_PYTHON':'/opt/miniconda2/envs/python3/bin/python',
                              'DRIVER_MEMORY' :'4g',
                              'MAPR_TICKET_FILE':maprticket,
                              'RESOURCE_MANAGER_HOST':'{10.96.94.156}',
                              'GIT_USER_REPO':notebook_dir,
                              'DOMAIN_NAME':'folder1.com',
                              'LIVY_SERVER':'http://10.96.94.89:8998/',
                              'SPARKMAGIC_IGNORE_SSL_ERRORS':'false',
                              'PUBLISH_TO_RESTAPI':'true',
                              'EXTRA_SPARK_ARGS':'--num-executors 40 --executor-cores 2 --executor-memory 2GB --conf spark.dynamicAllocation.minExecutors=1 --conf spark.dynamicAllocation.maxExecutors=40 --conf spark.dynamicAllocation.enabled=true --conf spark.shuffle.service.enabled=true --conf spark.executor.instances=2' #Include here any custom spark required parameter e.g. --conf spark.executor.instances=4
                              }

c.DockerSpawner.volumes = {'/folder1/notebooks/{username}': notebook_dir, }

c.DockerSpawner.remove_containers = True

c.DockerSpawner.extra_host_config = {'network_mode': 'host'}
c.DockerSpawner.use_internal_ip = True
c.DockerSpawner.network_name = 'host'

DISTRIBUTION='DISTRIBUTION_NAME'
if DISTRIBUTION == 'mapr':
    c.newDockerSpawner.images = {'images':[{'name':'sampleimage-notebook-base', 'description':'Python 3'},
                                           {'name':'sampleimage-notebook', 'description':'Spark kernels (Pyspark, Scala, SparkSQL)'},
                                           {'name':'sampleimage-notebook-tensorflow', 'description':'TensorFlow 2.5.0'},
                                           {'name':'sampleimage-notebook-pytorch', 'description':'PyTorch 1.5'}
                                           ]}
else:
    c.newDockerSpawner.images = {'images':[{'name':'sampleimage-notebook-base', 'description':'Python 3'},
                                           {'name':'sampleimage-notebook', 'description':'Spark kernels (Pyspark, Scala, SparkSQL)'},
                                           {'name':'sampleimage-notebook-tensorflow', 'description':'TensorFlow 2.5.0'},
                                           {'name':'sampleimage-notebook-pytorch', 'description':'PyTorch 1.5'},
                                           {'name':'sampleimage-notebook-sparkmagic', 'description':'Spark Magic'}
                                           ]}

c.newDockerSpawner.memoryLimit = "4G"
c.newDockerSpawner.cpuLimit = "2"

c.JupyterHub.spawner_class = 'dockerspawner.newDockerSpawner'

import netifaces
docker0 = netifaces.ifaddresses('docker0')
docker0_ipv4 = docker0[netifaces.AF_INET][0]
c.JupyterHub.hub_ip = docker0_ipv4['addr']

from jupyterhub.utils import random_port
import subprocess
import os
NOTEBOOK_SERVICE_UID = 1000
def create_dir_hook(spawner):
    spawner.environment['NB_USER'] = 'jupyter' # get system user
    spawned_user = spawner.user.name # get the spawned user
    spawner.environment['ACCESS_TOKEN'] = os.getenv(spawned_user+"_accesstoken")
    spawner.environment['REFRESH_TOKEN'] = os.getenv(spawned_user+"_refreshtoken")
    spawner.environment['OTDS_URL'] = os.getenv("OTDS_URL")
    submit_job_as_single_user='True'
    if(submit_job_as_single_user.lower() == 'false'):
      if(spawner.image=='sampleimage-notebook' or spawner.image=='sampleimage-notebook-sparkmagic' ):
        spawner.environment['NB_USER'] = spawned_user # get the spawned user
    volume_path = os.path.join('/folder1/notebooks', spawned_user)
    uid = NOTEBOOK_SERVICE_UID
    spawner.port = random_port()
    if not os.path.exists(volume_path):
        # create the directory with mode 0755:
        # hub and container user must have the same UID for it to be writable,
        # while staying readable by other users on the system
        os.makedirs(volume_path, 0o755)
        #subprocess.Popen("git init", shell=True, cwd=volume_path).communicate()
    # the user folder should be owned by the user configured in the docker container;
    # if not, the end user will not be able to create any notebook
    os.chown(volume_path, uid, uid)


# attach the hook function to the spawner
c.Spawner.pre_spawn_hook = create_dir_hook
c.DockerSpawner.debug = True



  • Logs
# paste relevant logs here, if any

Unhandled error starting mnbuser's server: The 'ip' trait of a Server instance expected a unicode string, not the NoneType None.

Hi! Could you show us the versions of all components, e.g. pip list, conda list, etc?

Below are the package versions:
jupyterhub==1.3.0
dockerspawner==0.11.1
oauthenticator==0.10.0
netifaces==0.10.9

pip list
Package Version


asn1crypto 1.4.0
bower 0.0.0
certifi 2021.5.30
cffi 1.14.4
chardet 4.0.0
conda 4.10.2
conda-package-handling 1.7.2
cryptography 2.5
idna 2.10
pip 21.1.2
pycosat 0.6.3
pycparser 2.20
pyOpenSSL 19.0.0
PySocks 1.7.1
requests 2.25.1
ruamel-yaml 0.15.71
setuptools 49.6.0.post20210108
six 1.16.0
tqdm 4.32.1
urllib3 1.25.8
wheel 0.36.2
conda list
Name Version Build Channel
_libgcc_mutex 0.1 main
asn1crypto 1.4.0 pyh9f0ad1d_0 conda-forge
bower 0.0.0 pypi_0 pypi
bzip2 1.0.8 h7b6447c_0
ca-certificates 2021.5.30 ha878542_0 conda-forge
certifi 2021.5.30 py36h5fab9bb_0 conda-forge
cffi 1.14.4 py36h211aa47_0 conda-forge
chardet 4.0.0 py36h5fab9bb_1 conda-forge
conda 4.10.2 py36h5fab9bb_0 conda-forge
conda-package-handling 1.7.2 py36he6145b8_0 conda-forge
cryptography 2.5 py36hb7f436b_1 conda-forge
idna 2.10 pyh9f0ad1d_0 conda-forge
libffi 3.2.1 hd88cf55_4
libgcc-ng 9.1.0 hdf63c60_0
libstdcxx-ng 9.1.0 hdf63c60_0
libxml2 2.9.9 hea5a465_1
lz4-c 1.8.1.2 h14c3975_0
lzo 2.10 h49e0be7_2
ncurses 5.9 10 conda-forge
openssl 1.0.2u h516909a_0 conda-forge
pip 21.1.2 pyhd8ed1ab_0 conda-forge
pycosat 0.6.3 py36he6145b8_1005 conda-forge
pycparser 2.20 pyh9f0ad1d_2 conda-forge
pyopenssl 19.0.0 py36_0 conda-forge
pysocks 1.7.1 py36h5fab9bb_3 conda-forge
python 3.6.0 2 conda-forge
python_abi 3.6 1_cp36m conda-forge
readline 6.2 0 conda-forge
requests 2.25.1 pyhd3deb0d_0 conda-forge
ruamel_yaml 0.15.71 py36h14c3975_1000 conda-forge
setuptools 49.6.0 py36h5fab9bb_3 conda-forge
six 1.16.0 pyh6c4a22f_0 conda-forge
sqlite 3.13.0 1 conda-forge
tk 8.5.19 2 conda-forge
tqdm 4.32.1 py_0
urllib3 1.25.8 py36h9f0ad1d_1 conda-forge
wheel 0.36.2 pyhd3deb0d_0 conda-forge
xz 5.2.4 h14c3975_4
yaml 0.1.7 had09818_2
zlib 1.2.11 h7b6447c_3
zstd 1.3.7 h0b5b093_0

Thanks! Would you mind trying with the latest JupyterHub (1.4.1) and DockerSpawner (12.0.0)? It may not fix the problem, but it’ll help rule out some additional problems.

I also noticed you’re using a custom spawner

Can you share the source for this class?

I have updated JupyterHub and DockerSpawner to the latest versions and tried again, but I am still facing the same issue.

[E JupyterHub pages:261] Failed to spawn single-user server with form
    Traceback (most recent call last):
      File "/folder1/miniconda3/envs/jupyterhub/lib/python3.9/site-packages/jupyterhub/handlers/pages.py", line 257, in post
        return await self._wrap_spawn_single_user(
      File "/folder1/miniconda3/envs/jupyterhub/lib/python3.9/site-packages/jupyterhub/handlers/pages.py", line 314, in _wrap_spawn_single_user
        raise web.HTTPError(
    tornado.web.HTTPError: HTTP 500: Internal Server Error (Error in Authenticator.pre_spawn_start: TraitError The 'ip' trait of a Server instance expected a unicode string, not the NoneType None.)

Source Code for custom spawner


from dockerspawner import DockerSpawner
from traitlets import Unicode, Dict

class newDockerSpawner(DockerSpawner):
    images = Dict({"images": [{"name": "imagen1", "description": "description 1"}]},
                    config=True,
                    help="""Image list""")
    memoryLimit = Unicode("2G",
        config=True,
        help="Notebook container memory limit")
    cpuLimit = Unicode("1",
        config=True,
        help="Notebook container CPU limit")
    form_template = Unicode(
        """<label for="stack">Select your desired Docker image</label>
        <select class="form-control" name="stack" required autofocus>
        {input_template}
        </select>
        <label for="memoryLimit">Notebook memory limit</label>
        <input name="memoryLimit" placeholder="e.g. 2G" value="{memory}"></input>
        <label for="cpuLimit">Notebook CPU limit</label>
        <input name="cpuLimit" placeholder="1" value="{cpu}"></input>
        """,
        config = True,
        help = """Template to use to construct options_form text. {input_template} is replaced with
            the result of formatting input_template against each item in the profiles list."""
        )

    input_template = Unicode("""
        <option value="{0}">{1}</option>""",
        config = True,
        )

    def _options_form_default(self):
        text = ''.join([ self.input_template.format(image['name'],image['description']) for image in self.images['images']])
        return self.form_template.format(input_template=text,memory=self.memoryLimit, cpu=self.cpuLimit)
   
    def options_from_form(self, formdata):
        options = {}
        options['stack'] = formdata['stack']
        container_image = ''.join(formdata['stack'])
        mem_limit = formdata.get('memoryLimit', [''])[0].strip()
        cpu_limit = float(formdata.get('cpuLimit', [''])[0].strip())
        print("SPAWN: " + container_image + " IMAGE")
        self.container_image = container_image
        self.mem_limit = mem_limit
        self.cpu_limit = cpu_limit
        return options
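As a side note, the form-parsing logic in `options_from_form` can be exercised in isolation. The sketch below (with hypothetical sample form data) mirrors how Tornado-style form data, where every value is a list of strings, maps to spawner options:

```python
def parse_options(formdata):
    """Mirror of the options_from_form logic above, parsing
    Tornado-style form data (each value is a list of strings)."""
    options = {'stack': formdata['stack']}
    container_image = ''.join(formdata['stack'])
    mem_limit = formdata.get('memoryLimit', [''])[0].strip()
    cpu_limit = float(formdata.get('cpuLimit', ['1'])[0].strip() or '1')
    return options, container_image, mem_limit, cpu_limit

# hypothetical form submission
options, image, mem, cpu = parse_options(
    {'stack': ['sampleimage-notebook'], 'memoryLimit': ['4G'], 'cpuLimit': ['2']}
)
```

Note that the original code calls `float(...)` on the raw `cpuLimit` field, which raises `ValueError` if the field is submitted empty; the sketch falls back to `'1'` to avoid that.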

Could you also try updating to the latest oauthenticator?

If that doesn’t work it would be helpful to go back to a more basic configuration, as yours is quite complex. To begin with, could you try something like the DummyAuthenticator with an unmodified DockerSpawner?
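A minimal configuration for that test could look something like this (a sketch; the image name is just an example):

```python
# jupyterhub_config.py -- minimal setup for isolating the problem.
# DummyAuthenticator accepts any username/password and is for testing only.
c = get_config()  # noqa

c.JupyterHub.authenticator_class = 'jupyterhub.auth.DummyAuthenticator'
c.JupyterHub.spawner_class = 'dockerspawner.DockerSpawner'
c.DockerSpawner.image = 'jupyter/base-notebook'  # example image
c.DockerSpawner.remove = True
c.JupyterHub.log_level = 'DEBUG'
```

If spawning works with this baseline, the problem can then be narrowed down by re-adding the custom spawner, the host network settings, and the OAuth authenticator one at a time.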