How to run a single-user pod in headless mode so that I can run Spark jobs on an external Hadoop cluster


I am running an on-prem Kubernetes cluster with JupyterHub on it.

I can successfully submit a job to a YARN queue, but the job fails because the user's notebook pod IP is not resolvable from the Hadoop cluster, so the executors can't talk back to the Spark driver running in that pod. I get an error like:

```
Caused by: Failed to connect to podIP:33630
        at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:204)
        at org.apache.spark.rpc.netty.Outbox$$anon$...
        at org.apache.spark.rpc.netty.Outbox$$anon$...
        at java.util.concurrent.ThreadPoolExecutor.runWorker(...)
        at java.util.concurrent.ThreadPoolExecutor$...
Caused by: ... pod-ip
```

I believe I'm missing something in my setup that would allow YARN to talk back to the spawned notebook pods on the Kubernetes cluster. I have read that I might need to run the single-user image or the hub image in headless mode, but I am not sure how to do that. I have looked through the KubeSpawner and JupyterHub configuration for a parameter that would let me achieve this, with no luck.
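One direction I have been considering (a sketch, not a confirmed fix): putting the single-user pod on the node's network so the driver is reachable at the node IP, which the Hadoop cluster can resolve. Assuming KubeSpawner's `extra_pod_config` merges raw pod-spec fields into the spawned pod, something like this in `jupyterhub_config.py` might do it:

```python
# jupyterhub_config.py (sketch) -- assumption: extra_pod_config fields are
# merged verbatim into the pod spec of each spawned single-user pod.

# Run the notebook pod on the host network, so the Spark driver listens on
# the node's IP, which is routable from the external Hadoop cluster.
c.KubeSpawner.extra_pod_config = {
    "hostNetwork": True,
    # Keep in-cluster DNS working while on the host network.
    "dnsPolicy": "ClusterFirstWithHostNet",
}
```

This trades pod-level network isolation for reachability, so it may not be acceptable in every cluster; the alternative would be exposing the driver ports through a NodePort Service per user pod.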

Any help or hints are greatly appreciated.

For now I am passing the Spark driver the internal Kubernetes pod IP by setting "" to str(socket.gethostbyname(socket.gethostname())). With that I can connect to YARN, but the YARN resource manager is still unable to talk back to the Spark driver in the single-user pod.
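For reference, here is a sketch of the driver-side properties involved. The hostname and ports below are placeholders; the key assumption is that `spark.driver.host` must be an address the Hadoop nodes can actually reach (the bare pod IP is only routable inside Kubernetes, which is consistent with the error above), while `spark.driver.bindAddress` is what the driver binds to inside the pod:

```python
import socket

def driver_callback_conf(advertised_host: str,
                         driver_port: int = 33630,
                         block_manager_port: int = 33631) -> dict:
    """Build the Spark properties that control how executors call back
    to the driver.  `advertised_host` is a placeholder for whatever
    address the Hadoop cluster can resolve (e.g. the node IP when using
    hostNetwork, or a NodePort Service address)."""
    # What the author currently passes: the pod-internal IP.
    pod_ip = socket.gethostbyname(socket.gethostname())
    return {
        # Address advertised to YARN; must be resolvable from Hadoop nodes.
        "spark.driver.host": advertised_host,
        # Address the driver actually binds to inside the pod.
        "spark.driver.bindAddress": pod_ip,
        # Pin the ports so they can be opened in a Service or firewall rule
        # instead of being chosen randomly per job.
        "spark.driver.port": str(driver_port),
        "spark.driver.blockManager.port": str(block_manager_port),
    }

conf = driver_callback_conf("node-01.example.com")
```

These four property names are standard Spark configuration keys; everything else (the hostname, the port numbers, the helper function itself) is illustrative.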