I am setting up JupyterHub with nginx/letsencrypt/certbot as my HTTPS reverse proxy, but configurable-http-proxy was also installed with JupyterHub. configurable-http-proxy is running, but it doesn't stop me from reaching the hub and logging in. If I don't need it, how do I stop it from executing when the server starts?
So even if I am running nginx as my reverse proxy in a separate Docker container, I still need something like those two lines in my jupyterhub_config.py file on the JupyterHub container?
You typically don't need any configuration related to CHP unless you are running it separately from the Hub as its own service, in which case you would set `c.ConfigurableHTTPProxy.should_start = False`.
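Roughly along these lines, as a sketch only; the service hostname `proxy` and the token value here are placeholders, not anything from your setup:

```
# jupyterhub_config.py -- only needed if CHP runs as its own separate service
c.ConfigurableHTTPProxy.should_start = False
# where the Hub can reach the proxy's REST API (placeholder hostname)
c.ConfigurableHTTPProxy.api_url = 'http://proxy:8001'
# must match the CONFIGPROXY_AUTH_TOKEN set on the proxy process
c.ConfigurableHTTPProxy.auth_token = 'some-shared-secret'
```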
Hi @minrk, thanks for the reply. I am trying to track down the reason why my JupyterHub is not spawning servers, and I want to rule out CHP. When the Hub first comes up, this is the log output:
docker logs 9e0 -f
[I 2019-05-23 02:09:16.879 JupyterHub app:2120] Using Authenticator: jupyterhub.auth.PAMAuthenticator-1.0.0
[I 2019-05-23 02:09:16.879 JupyterHub app:2120] Using Spawner: dockerspawner.swarmspawner.SwarmSpawner-0.11.1
[D 2019-05-23 02:09:16.885 JupyterHub app:1297] Generating new cookie_secret
[I 2019-05-23 02:09:16.886 JupyterHub app:1302] Writing cookie_secret to /srv/jupyterhub/jupyterhub_cookie_secret
[D 2019-05-23 02:09:16.886 JupyterHub app:1424] Connecting to db: sqlite:///jupyterhub.sqlite
[D 2019-05-23 02:09:16.904 JupyterHub orm:718] Stamping empty database with alembic revision 4dc2d5a8c53c
[I 2019-05-23 02:09:16.908 alembic.runtime.migration migration:130] Context impl SQLiteImpl.
[I 2019-05-23 02:09:16.909 alembic.runtime.migration migration:137] Will assume non-transactional DDL.
[I 2019-05-23 02:09:16.937 alembic.runtime.migration migration:356] Running stamp_revision -> 4dc2d5a8c53c
[D 2019-05-23 02:09:16.937 alembic.runtime.migration migration:558] new branch insert 4dc2d5a8c53c
[I 2019-05-23 02:09:17.106 JupyterHub proxy:460] Generating new CONFIGPROXY_AUTH_TOKEN
[D 2019-05-23 02:09:17.196 JupyterHub app:1910] Loading state for alvin from db
[D 2019-05-23 02:09:17.196 JupyterHub app:1926] Loaded users:
alvin admin
[I 2019-05-23 02:09:17.211 JupyterHub app:2337] Hub API listening on http://0.0.0.0:8000/hub/
[I 2019-05-23 02:09:17.211 JupyterHub app:2339] Private Hub API connect url http://jupyterhubserver:8000/hub/
[W 2019-05-23 02:09:17.213 JupyterHub proxy:642] Running JupyterHub without SSL. I hope there is SSL termination happening somewhere else...
[I 2019-05-23 02:09:17.213 JupyterHub proxy:645] Starting proxy @ http://:8000
[D 2019-05-23 02:09:17.213 JupyterHub proxy:646] Proxy cmd: ['configurable-http-proxy', '--ip', '', '--port', '8000', '--api-ip', '127.0.0.1', '--api-port', '8001', '--error-target', 'http://jupyterhubserver:8000/hub/error']
[D 2019-05-23 02:09:17.220 JupyterHub proxy:561] Writing proxy pid file: jupyterhub-proxy.pid
02:09:17.989 - info: [ConfigProxy] Proxying http://*:8000 to (no default)
02:09:17.992 - info: [ConfigProxy] Proxy API at http://127.0.0.1:8001/api/routes
02:09:17.993 - error: [ConfigProxy] Uncaught Exception Error: listen EADDRINUSE: address already in use :::8000
at Server.setupListenHandle [as _listen2] (net.js:1259:14)
at listenInCluster (net.js:1307:12)
at Server.listen (net.js:1395:7)
at Object.<anonymous> (/opt/conda/lib/node_modules/configurable-http-proxy/bin/configurable-http-proxy:202:20)
at Module._compile (internal/modules/cjs/loader.js:816:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:827:10)
at Module.load (internal/modules/cjs/loader.js:685:32)
at Function.Module._load (internal/modules/cjs/loader.js:620:12)
at Function.Module.runMain (internal/modules/cjs/loader.js:877:12)
at internal/main/run_main_module.js:21:11
[D 2019-05-23 02:09:18.204 JupyterHub proxy:681] Proxy started and appears to be up
[D 2019-05-23 02:09:18.205 JupyterHub proxy:314] Fetching routes to check
[D 2019-05-23 02:09:18.208 JupyterHub proxy:765] Proxy: Fetching GET http://127.0.0.1:8001/api/routes
[I 2019-05-23 02:09:18.227 JupyterHub proxy:319] Checking routes
[I 2019-05-23 02:09:18.227 JupyterHub proxy:399] Adding default route for Hub: / => http://jupyterhubserver:8000
[D 2019-05-23 02:09:18.228 JupyterHub proxy:765] Proxy: Fetching POST http://127.0.0.1:8001/api/routes/
[I 2019-05-23 02:09:18.232 JupyterHub app:2422] JupyterHub is now running at http://:8000
Is the one error, `02:09:17.993 - error: [ConfigProxy] Uncaught Exception Error: listen EADDRINUSE: address already in use :::8000`, an issue?
This is the output from the container:
root@jupyterhubserver:/srv/jupyterhub# netstat -lep
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN root 23235855 1/python
tcp 0 0 localhost:8001 0.0.0.0:* LISTEN root 23229864 12/node
tcp 0 0 127.0.0.11:41921 0.0.0.0:* LISTEN root 23228198 -
udp 0 0 127.0.0.11:38602 0.0.0.0:* root 23228197 -
Active UNIX domain sockets (only servers)
Proto RefCnt Flags Type State I-Node PID/Program name Path
The problem is that your JupyterHub process is already listening on port 8000, and CHP is trying to use that port as well.
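For context, this is roughly how the ports line up by default in JupyterHub 1.0 (a sketch, to the best of my understanding):

```
# Default port layout (JupyterHub 1.0), sketched for illustration:
c.JupyterHub.port = 8000      # public port of the proxy (what CHP binds)
c.JupyterHub.hub_port = 8081  # internal port of the Hub's REST API
# If the Hub itself is told to bind :8000, CHP then fails to start its
# public listener with "EADDRINUSE: address already in use :::8000".
```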
Could you post a minimal config file, container image name and steps to reproduce this? That might help others figure out why both are trying to listen on the same port.
Container image: jupyterhub/jupyterhub-onbuild:latest (this will include my jupyterhub_config.py, which is in the same folder as the Dockerfile).
Snippet from my jupyterhub_config.py:
import os

## The public facing port of the proxy
c.JupyterHub.hub_port = 8000
## The public facing ip of the whole application (the proxy)
c.JupyterHub.hub_ip = '0.0.0.0'
## The ip for this process
c.JupyterHub.hub_connect_ip = 'jupyterhubserver'
## reverse proxy setting for nginx
#c.JupyterHub.base_url = '/hub/'
# Defaults to an empty set, in which case no user has admin access.
c.JupyterHub.spawner_class = 'dockerspawner.SwarmSpawner'
##possibly delete
#c.SwarmSpawner.jupyterhub_service_name = "jupyterhubserver"
network_name = os.environ['DOCKER_NETWORK_NAME']
c.SwarmSpawner.network_name = network_name
c.SwarmSpawner.use_internal_ip = True
# Pass the network name as argument to spawned containers
c.SwarmSpawner.extra_host_config = {'network_mode': network_name}
c.SwarmSpawner.host_ip = '0.0.0.0'
notebook_dir = os.environ.get('NOTEBOOK_DIR') or '/home/jovyan/work'
c.SwarmSpawner.notebook_dir = notebook_dir
# Mount the real user's Docker volume on the host to the notebook user's
# notebook directory in the container
c.SwarmSpawner.volumes = { 'jupyterhub-user-{username}': notebook_dir }
# Remove containers once they are stopped
c.SwarmSpawner.remove_containers = True
# For debugging arguments passed to spawned containers
c.SwarmSpawner.debug = True
I started this project from an older guide and have just been modifying the config file as I figure my way through this.
Would this be a simple way of addressing the issue: `c.ConfigurableHTTPProxy.api_url = 'http://proxy:8001'`?
I have three steps (I used to do this with a series of `docker create xxx` commands and shell scripts). Here is my stack file:
```
version: '3.5'
services:
  jupyterhubserver:
    image: sgsupdocker/jupyterhub-onbuild:052219.2
    hostname: jupyterhubserver
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - nfsvolume:/home/jovyan/work
    networks:
      jupyterhub:
        aliases:
          - jupyterhubserver
    environment:
      DOCKER_NETWORK_NAME: jupyterhub
      NOTEBOOK_DIR: /home/jovyan/work
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager
    ports:
      - 8000:8000
  nginx:
    image: linuxserver/letsencrypt:latest
    hostname: nginx
    networks:
      jupyterhub:
        aliases:
          - nginx
    ports:
      - 443:443
      - 80:80
    environment:
      PUID: 1000
      PGID: 1000
      TZ: US/Arizona
      EMAIL: sirace@asu.edu
      URL: sgsup-jupyterhub.duckdns.org
      SUBDOMAINS: -wildcard
      VALIDATION: duckdns
      DUCKDNSTOKEN: redacted
    volumes:
      - /etc/jupyterhub/letsencrypt_container_nginx.conf:/config/nginx/site-confs/default
      - nginx_volume:/config
networks:
  jupyterhub:
    driver: overlay
    attachable: true
    name: jupyterhub
```
Thanks for the insights.
Would it make sense to just delete my current jupyterhub_config.py file and start over? If so, can you shoot me a link to a good starter config file for JupyterHub on Docker Swarm? I have seen a few, but a recommendation would be nice.
Thanks.
Should it be the case that the nginx proxy catches all incoming requests from the world and then hands them to CHP, which in turn hands them to JupyterHub? And JupyterHub will spawn a notebook server, with the internal network communication between the Hub and the servers handled by CHP?
Yes. Nothing should be talking directly to the Hub process. Bypassing the proxy causes requests to be routed to the wrong place. The way to view JupyterHub from the outside is as a multi-process application whose public face (the only one anything outside JupyterHub should connect to) is configurable-http-proxy. The Hub in this case is a private internal implementation detail.
This is where there’s a misunderstanding:
## The public facing port of the proxy
c.JupyterHub.hub_port = 8000
## The public facing ip of the whole application (the proxy)
c.JupyterHub.hub_ip = '0.0.0.0'
Those are not the public-facing ip and port. Those are the ip and port of the Hub process behind the proxy. They also happen to be the default values for the proxy’s public interface, so this configuration has instructed the hub and proxy to both be listening on *:8000.
I suspect what you want is to still set c.JupyterHub.hub_ip = '0.0.0.0' (required to allow servers to connect directly to the hub), but not set c.JupyterHub.hub_port, or at least set it to a value that's not already assigned. The Hub's private port does not need to be exposed publicly; it will be accessible to other containers on the docker network.
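As a rough sketch (keeping the rest of the config as it is), that part could look like:

```
## The Hub's internal API interface -- not the proxy's public one
c.JupyterHub.hub_ip = '0.0.0.0'                    # so spawned containers can reach the Hub
c.JupyterHub.hub_connect_ip = 'jupyterhubserver'   # hostname other containers use
# Do not set c.JupyterHub.hub_port at all (it defaults to 8081),
# or pick any port other than the proxy's public port 8000.
```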
Generally, yes. Ports do not need to be exposed to be accessed from within a docker network. Exposing a port is only necessary for access from outside, and only one port needs that for all of JupyterHub: the public port of CHP (or nginx, in your case).