JupyterHub on k8s not running | Redirect loop detected | 500: Internal Server Error

Hi all,

I created a custom image with the Dockerfile below and deployed it through Helm with the JupyterHub config file shown here.

  1. Config file below:

proxy:
  secretToken:

singleuser:
  defaultUrl: "/lab"
  profileList:
    - display_name: "Data Science Spark environment"
      description: "Data Science libraries with the spark"
      kubespawner_override:
        image: :tag
        memory:
          limit: 32G
          guarantee: 8G
        cpu:
          limit: 8
          guarantee: 2
  storage:
    extraVolumes:
      - name: shm-volume
        emptyDir:
          medium: Memory
    extraVolumeMounts:
      - name: shm-volume
        mountPath: /dev/shm
    dynamic:
      storageClass: default
    capacity: 10Gi
  extraEnv:
    EDITOR: "vim"

  2. Created a k8s ingress record pointing to the hub service (a rough sketch of this is shown after the logs below).

  3. When I log in, I get this redirect error:

[screenshot of the redirect error page]

  4. The logs show the following for user ssikarwa:

500 : Internal Server Error

Redirect loop detected. Notebook has jupyterhub version 0.9.1, but the Hub expects 1.2.2. Try installing jupyterhub==1.2.2 in the user environment if you continue to have problems.
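
For reference, the ingress from step 2 was just a single rule sending all traffic to the hub service. A rough sketch of what it looked like (the hostname, service port, and resource names here are assumptions, not copied from the actual manifest):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jupyterhub
spec:
  rules:
    - host: jupyterhub.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hub        # the chart's internal hub service
                port:
                  number: 8081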

Any support will be highly appreciated.

The Dockerfile for the above image is below:

FROM jupyter/all-spark-notebook:2343e33dec46

ENV HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
ENV PATH $PATH:$HADOOP_HOME/bin
ENV LD_LIBRARY_PATH=$HADOOP_HOME/lib/native

RUN rm -rf /opt/hadoop/ && \
    cd /opt/ && \
    wget http://archive.apache.org/dist/hadoop/common/hadoop-3.1.2/hadoop-3.1.2.tar.gz > /dev/null 2>&1 && \
    tar -C /opt/ -xvf hadoop-3.1.2.tar.gz > /dev/null 2>&1 && \
    mv hadoop-3.1.2 hadoop && rm -f hadoop-3.1.2.tar.gz && \
    rm -rf /opt/hadoop/share/doc && rm -f /opt/hadoop/share/hadoop/tools/lib/aws-java-sdk-bundle-1.11.271.jar

RUN rm -rf /opt/spark/ && \
    cd /opt/ && rm -rf spark && rm -rf spark-2.4.0-bin-without-hadoop && \
    wget http://archive.apache.org/dist/spark/spark-2.4.3/spark-2.4.3-bin-without-hadoop.tgz > /dev/null 2>&1 && \
    tar -C /opt/ -xvf spark-2.4.3-bin-without-hadoop.tgz > /dev/null 2>&1 && rm -f spark-2.4.3-bin-without-hadoop.tgz && \
    mv spark-2.4.3-bin-without-hadoop spark && \
    cd /opt/spark/jars/

ENV SPARK_DIST_CLASSPATH=/opt/hadoop/etc/hadoop:/opt/hadoop/share/hadoop/common/lib/:/opt/hadoop/share/hadoop/common/:/opt/hadoop/share/hadoop/hdfs:/opt/hadoop/share/hadoop/hdfs/lib/:/opt/hadoop/share/hadoop/hdfs/:/opt/hadoop/share/hadoop/mapreduce/lib/:/opt/hadoop/share/hadoop/mapreduce/:/opt/hadoop/share/hadoop/yarn:/opt/hadoop/share/hadoop/yarn/lib/:/opt/hadoop/share/hadoop/yarn/

ENV SPARK_HOME=/usr/local/spark
ENV PATH $PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin

RUN pip3 install boto3
RUN pip3 install --upgrade pip

COPY spark-dependencies.jar /usr/local/spark/jars/spark-dependencies.jar

RUN pip3 install wheel
RUN pip3 install cython
#COPY entrypoint.sh entrypoint.sh
COPY requirements.txt requirements.txt
RUN chmod 777 requirements.txt
#RUN chmod -R 777 entrypoint.sh

RUN pip3 install --no-cache-dir -r requirements.txt

Have you tried the suggestion from the error message? jupyter/all-spark-notebook:2343e33dec46 is a very old image; can you use a more recent one?

Thanks for the reply @manics. The mistake was mine: the ingress was connecting to the hub service instead of proxy-public.

It worked after that.
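
For anyone landing on the same error: with the Zero to JupyterHub chart, external traffic should be routed to the proxy-public service (which forwards to the hub and the user servers), not to the hub service directly. The working ingress differed from the sketch above only in the backend (hostname and port are again assumptions):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jupyterhub
spec:
  rules:
    - host: jupyterhub.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: proxy-public   # route through the chart's proxy, not the hub service
                port:
                  number: 80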