I have deployed JupyterHub for Kubernetes on 3 nodes. How do I set a specific UID and GID when the pod starts?
For example: I have a local user Max, whose UID is 2009 and GID is 800. How do I set the config to map this UID and GID to the default jovyan (1000:100)? And which authenticator should I use to look up the user on the local machine?
I’ve got an example configuration with LDAP in “LDAP with modified username and Git author in singleuser server”.
“Which authenticator should I use to look up the user on the local machine?”
Since you’re using K8s, local users/authentication can’t be used. You’ll either need to use an external (remote) authenticator such as OAuthenticator, or let JupyterHub manage all your users, for example with https://github.com/jupyterhub/nativeauthenticator, though in that case you won’t need to modify UIDs/GIDs.
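As a minimal sketch of the latter, assuming Z2JH's hub.extraConfig and the class path published by the nativeauthenticator project (open_signup is one of its options):

    # A minimal sketch: let JupyterHub manage its own user database with
    # NativeAuthenticator instead of relying on local (host) accounts.
    c.JupyterHub.authenticator_class = "nativeauthenticator.NativeAuthenticator"
    # Require admin approval of new sign-ups rather than open registration.
    c.NativeAuthenticator.open_signup = False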
If I create the same users (same UIDs and GIDs) inside the JupyterHub pod, could I use PAM? I have many files with different owner UIDs, and I want each user who spawns a new pod to be able to access their own local files, instead of everyone being jovyan and sharing all files.
In theory you could write a custom authenticator and configure the spawner to work with local users and groups, but it’ll require some experimentation since Kubernetes assumes everything is running in containers. How are you configuring users and groups on your hosts? Are you using an external directory service?
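As a rough, untested sketch of that idea: the docker-stacks images honour NB_UID/NB_GID/NB_USER when the container starts as root, so a custom authenticator could set them per user in pre_spawn_start. USER_IDS here is a hypothetical hard-coded stand-in for however you provision accounts on your hosts:

    from jupyterhub.auth import Authenticator

    # Hypothetical stand-in for your per-host user database (/etc/passwd).
    USER_IDS = {"max": (2009, 800)}

    class LocalIDAuthenticator(Authenticator):
        # authenticate() still needs implementing, or inherit from a real
        # authenticator (e.g. an OAuthenticator class) instead of the base.

        async def pre_spawn_start(self, user, spawner):
            uid, gid = USER_IDS.get(user.name, (1000, 100))  # default: jovyan
            # Requires the singleuser container to start as root (uid: 0)
            # so the docker-stacks start script can switch user.
            spawner.environment["NB_UID"] = str(uid)
            spawner.environment["NB_GID"] = str(gid)
            spawner.environment["NB_USER"] = user.name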
I just set up the same UIDs and GIDs on the different hosts; I’m not using an external directory service, only local settings.
Manics, thank you so much.
I also want to use LDAP to have the same UIDs/GIDs for user accounts in JupyterHub.
The “LDAP with modified username and Git author in singleuser server” example that you shared works perfectly with the “zoidberg” user on the test-openldap server you mentioned.
However, when trying to log in to JupyterHub using my own LDAP server, I encounter the following error: ldap3.core.exceptions.LDAPStartTLSError: ('wrap socket error: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:1129)',).
It seems like there might be an issue with the configuration of my LDAP server. Could you please help me identify the problem?
Additionally, I have one more request. Could you share the password for the admin account on your test-openldap server? I’d like to perform various tests using your test-openldap instead of my LDAP server.
Here is my LDAP user information, together with my JupyterHub values.yml:
[root@badmin01 ~]# ldapsearch -x -b "dc=kdw,dc=bio" -LLL
:
dn: uid=k092a01,ou=People,dc=kdw,dc=bio
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: k092a01
uid: k092a01
uidNumber: 1000687
gidNumber: 1000592
homeDirectory: /home01/k092a01
loginShell: /bin/bash
gecos: k092a01 [Openldap Test k092a01]
shadowMin: 0
shadowMax: 99999
shadowWarning: 7
shadowLastChange: 19668
:
[root@bkb-cont1 2.0.0]# cat jupyterhub.yml
hub:
  config:
    # JupyterHub:
    #   authenticator_class: ldapauthenticator.LDAPAuthenticator
    Authenticator:
      enable_auth_state: true
    LDAPAuthenticator:
      # See https://github.com/rroemhild/docker-test-openldap for users
      server_address: 172.22.1.95
      lookup_dn: True
      bind_dn_template: "cn={username},ou=People,dc=kdw,dc=bio"
      user_search_base: "ou=People,dc=kdw,dc=bio"
      user_attribute: uid
      lookup_dn_user_dn_attribute: cn
      escape_userdn: True
      #auth_state_attributes: ["uid", "cn", "mail", "ou"]
      auth_state_attributes: ["uid", "cn", "ou"]
      use_lookup_dn_username: False
  extraConfig:
    SpawnerCustomConfig: |
      from ldapauthenticator import LDAPAuthenticator
      from hashlib import md5

      class LDAPAuthenticatorInfo(LDAPAuthenticator):
          async def pre_spawn_start(self, user, spawner):
              auth_state = await user.get_auth_state()
              self.log.debug(f"pre_spawn_start auth_state: {auth_state}")
              if not auth_state:
                  return
              # Setup environment variables to pass to singleuser server
              # The test server doesn't have numeric UIDs, so create one by hashing uid
              spawner.environment["NB_UID"] = str(
                  int(md5(auth_state["uid"][0].encode()).hexdigest(), 16) % 32768 + 1001)
              spawner.environment["NB_USER"] = auth_state["uid"][0]
              spawner.environment["GIT_AUTHOR_NAME"] = auth_state["cn"][0]
              spawner.environment["GIT_COMMITTER_NAME"] = auth_state["cn"][0]
              #spawner.environment["GIT_AUTHOR_EMAIL"] = auth_state["mail"][0]
              #spawner.environment["GIT_COMMITTER_EMAIL"] = auth_state["mail"][0]

      c.JupyterHub.authenticator_class = LDAPAuthenticatorInfo
    CustomHubConfig: |
      c.JupyterHub.cleanup_servers = True
proxy:
  service:
    type: NodePort
singleuser:
  image:
    name: docker.io/jupyter/base-notebook
    tag: latest
  # Unset cmd so the image default is used.
  cmd:
  uid: 0
  storage:
    # Mount persistent volume at correct home
    homeMountPath: /home/{username}
#ingress:
#  enabled: true
#  hosts:
#    - %K8S_HOSTNAME%
debug:
  enabled: true
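Incidentally, since this directory has real uidNumber/gidNumber attributes (unlike the test server the md5 hash was written for), a variation could request them via auth_state_attributes and pass them through directly. A sketch, untested, assuming auth_state_attributes also includes "uidNumber" and "gidNumber":

    from ldapauthenticator import LDAPAuthenticator

    class LDAPAuthenticatorInfo(LDAPAuthenticator):
        async def pre_spawn_start(self, user, spawner):
            auth_state = await user.get_auth_state()
            if not auth_state:
                return
            # Use the directory's numeric IDs instead of hashing the username.
            spawner.environment["NB_UID"] = str(auth_state["uidNumber"][0])
            spawner.environment["NB_GID"] = str(auth_state["gidNumber"][0])
            spawner.environment["NB_USER"] = auth_state["uid"][0]

    c.JupyterHub.authenticator_class = LDAPAuthenticatorInfo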
I’ve got very little experience with LDAP; I try to avoid it due to its complexity.
The Helm chart is just a wrapper around this Docker image. Source: https://github.com/rroemhild/docker-test-openldap
I think the password is GoodNewsEveryone
Otherwise there are lots of LDIF files in that repo, so you can probably build your own custom LDAP image and modify https://github.com/manics/helm-test-openldap/blob/b36d8e930e183ba501be5dad9ad1ed227e8c4e1b/test-openldap/values.yaml
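As for the handshake failure: ldapauthenticator uses the ldap3 library underneath, so one way to narrow it down might be to try StartTLS directly against your server, outside JupyterHub. A sketch, with the address and DN copied from the config above and a placeholder password:

    import ldap3

    server = ldap3.Server("172.22.1.95", get_info=ldap3.ALL)
    conn = ldap3.Connection(
        server,
        user="cn=k092a01,ou=People,dc=kdw,dc=bio",
        password="CHANGEME",  # placeholder
    )
    # If this raises the same LDAPStartTLSError, the problem is the LDAP
    # server's TLS setup (e.g. no certificate configured), not JupyterHub.
    conn.start_tls()
    conn.bind()
    print(conn.extend.standard.who_am_i())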
Mr. manics, I’m very grateful; your help will be of great use to me.
Hello,
Does this method work for all notebook distributions? For example, I tried this solution with the Bitnami Helm chart, but it does not seem to work as expected; it returns early at the if condition.
Thank you,
Kind Regards.
By the way, I tried the other hook points. For example, I installed sudo and some other configuration in my images, but when I run commands in the hook points (auth_state_hook(), pre_spawn_start()) they all throw exceptions like “sudo: command not found”. It seems those hook points run before the pod is created with my image, and in a different image.
Is there any documentation covering those details?
auth_state_hook and pre_spawn_start are Python hooks that run in the hub, not in the singleuser environment. They’re intended to perform backend tasks, or to customise the environment (e.g. by setting environment variables), and they run before the singleuser server is created.
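If you need commands to run inside the user's container (where your sudo is installed), one option is a Kubernetes postStart lifecycle hook rather than a hub-side Python hook. A sketch, assuming KubeSpawner as used by Z2JH:

    # Unlike pre_spawn_start, a postStart hook executes inside the singleuser
    # container after it starts, so tools baked into that image are available.
    c.KubeSpawner.lifecycle_hooks = {
        "postStart": {
            "exec": {
                "command": ["sh", "-c", "echo container started >> /tmp/poststart.log"]
            }
        }
    }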
The Bitnami chart is independent of Z2JH, so you’ll need to refer to the Bitnami documentation.