I am hosting JupyterHub on an AKS cluster, deployed with the Helm chart and using Azure AD to authenticate our org's users. The App Registration on Azure is properly configured, with the User.Read API permission granted.
However, I am getting a 403: Forbidden error when signing in with Azure AD credentials.
Which version of Z2JH are you using? Was the hub restarted after you configured debug logging? Can you investigate why the debug logs aren't enabled? They may contain useful information on why the login failed.
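For reference, the chart's debug logging is just a top-level values flag; the hub pod only picks it up after it is recreated, e.g. by re-running your `helm upgrade`. A minimal snippet (the same `debug.enabled` setting that shows up in the values file later in this thread):

```yaml
# Turn on verbose (DEBUG) logging for the hub.
# The hub pod must be restarted (e.g. via a fresh `helm upgrade`)
# for this setting to take effect.
debug:
  enabled: true
```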
That's strange. Could you please paste your full configuration file again (including the debug setting), along with the exact command you're using to deploy JupyterHub?
# fullnameOverride and nameOverride distinguish between blank strings, null values,
# and non-blank strings. For more details, see the configuration reference.
fullnameOverride: ""
nameOverride:
# custom can contain anything you want to pass to the hub pod, as all passed
# Helm template values will be made available there.
custom: {}
# imagePullSecret is configuration to create a k8s Secret that Helm chart's pods
# can get credentials from to pull their images.
imagePullSecret:
create: false
automaticReferenceInjection: true
registry:
username:
password:
email:
# imagePullSecrets is configuration to reference the k8s Secret resources the
# Helm chart's pods can get credentials from to pull their images.
imagePullSecrets:
- jhub-secret-acr-sp
# hub relates to the hub pod, responsible for running JupyterHub, its configured
# Authenticator class KubeSpawner, and its configured Proxy class
# ConfigurableHTTPProxy. KubeSpawner creates the user pods, and
# ConfigurableHTTPProxy speaks with the actual ConfigurableHTTPProxy server in
# the proxy pod.
hub:
revisionHistoryLimit:
config:
AzureAdOAuthenticator:
enable_auth_state: true
client_id: <REDACTED>
client_secret: <REDACTED>
oauth_callback_url: https://hostname/hub/oauth_callback
tenant_id: <REDACTED>
scope:
- openid
- profile
- email
JupyterHub:
admin_access: false
authenticator_class: azuread
service:
type: ClusterIP
annotations: {}
ports:
nodePort:
extraPorts: []
loadBalancerIP:
baseUrl: /
cookieSecret:
initContainers: []
nodeSelector:
kubernetes.azure.com/agentpool: defaultpool
tolerations: []
concurrentSpawnLimit: 64
consecutiveFailureLimit: 5
activeServerLimit:
deploymentStrategy:
## type: Recreate
## - sqlite-pvc backed hubs require the Recreate deployment strategy as a
## typical PVC storage can only be bound to one pod at a time.
## - JupyterHub isn't designed to support being run in parallel. More work
## needs to be done in JupyterHub itself before a fully highly available (HA)
## deployment of JupyterHub on k8s is possible.
type: Recreate
db:
type: sqlite-pvc
upgrade:
pvc:
annotations: {}
selector: {}
accessModes:
- ReadWriteOnce
storage: 1Gi
subPath:
storageClassName:
url:
password:
labels: {}
annotations: {}
command: []
args: []
extraConfig:
auth.py: |
import os
from oauthenticator.azuread import AzureAdOAuthenticator
c.JupyterHub.authenticator_class = AzureAdOAuthenticator
c.Application.log_level = 'DEBUG'
c.AzureAdOAuthenticator.tenant_id = '<REDACTED>'
c.AzureAdOAuthenticator.oauth_callback_url = 'https://hostname/hub/oauth_callback'
c.AzureAdOAuthenticator.client_id = '<REDACTED>'
c.AzureAdOAuthenticator.client_secret = '<REDACTED>'
c.AzureAdOAuthenticator.scope = ['openid', 'profile', 'email']
extraFiles: {}
extraEnv: {}
extraContainers: []
extraVolumes: []
extraVolumeMounts: []
image:
name: jupyterhub/k8s-hub
tag: "2.0.0"
pullPolicy:
pullSecrets: []
resources: {}
podSecurityContext:
fsGroup: 1000
containerSecurityContext:
runAsUser: 1000
runAsGroup: 1000
allowPrivilegeEscalation: false
lifecycle: {}
loadRoles: {}
services: {}
pdb:
enabled: false
maxUnavailable:
minAvailable: 1
networkPolicy:
enabled: true
ingress: []
egress: []
egressAllowRules:
cloudMetadataServer: true
dnsPortsPrivateIPs: true
nonPrivateIPs: true
privateIPs: true
interNamespaceAccessLabels: ignore
allowedIngressPorts: []
allowNamedServers: false
namedServerLimitPerUser:
authenticatePrometheus:
redirectToServer:
shutdownOnLogout:
templatePaths: []
templateVars: {}
livenessProbe:
# The livenessProbe aims to give JupyterHub sufficient time to start up, but
# to restart it if it becomes unresponsive for ~5 min.
enabled: true
initialDelaySeconds: 300
periodSeconds: 10
failureThreshold: 30
timeoutSeconds: 3
readinessProbe:
# The readinessProbe's aim is to provide a successful startup indication,
# but following that never become unready before its livenessProbe fails and
# restarts it if needed. Becoming unready after startup serves no purpose, as
# there is no other pod to fall back to in our non-HA deployment.
enabled: true
initialDelaySeconds: 0
periodSeconds: 2
failureThreshold: 1000
timeoutSeconds: 1
existingSecret:
serviceAccount:
create: true
name:
annotations: {}
extraPodSpec: {}
rbac:
create: true
# proxy relates to the proxy pod, the proxy-public service, and the autohttps
# pod and proxy-http service.
proxy:
secretToken:
annotations: {}
deploymentStrategy:
## type: Recreate
## - JupyterHub's interaction with the CHP proxy becomes a lot more robust
## with this configuration. To understand this, consider that JupyterHub
## during startup will interact a lot with the k8s service to reach a
## ready proxy pod. If the hub pod during a helm upgrade is restarting
## directly while the proxy pod is making a rolling upgrade, the hub pod
## could end up running a sequence of interactions with the old proxy pod
## and finishing up the sequence of interactions with the new proxy pod.
## As CHP proxy pods carry individual state this is very error prone. One
## outcome when not using Recreate as a strategy has been that user pods
## have been deleted by the hub pod because it considered them unreachable
## as it only configured the old proxy pod but not the new before trying
## to reach them.
type: Recreate
## rollingUpdate:
## - WARNING:
## This is required to be set explicitly blank! Without it being
## explicitly blank, k8s will let eventual old values under rollingUpdate
## remain and then the Deployment becomes invalid and a helm upgrade would
## fail with an error like this:
##
## UPGRADE FAILED
## Error: Deployment.apps "proxy" is invalid: spec.strategy.rollingUpdate: Forbidden: may not be specified when strategy `type` is 'Recreate'
## Error: UPGRADE FAILED: Deployment.apps "proxy" is invalid: spec.strategy.rollingUpdate: Forbidden: may not be specified when strategy `type` is 'Recreate'
rollingUpdate:
# service relates to the proxy-public service
service:
type: ClusterIP
labels: {}
annotations: {}
nodePorts:
http:
https:
disableHttpPort: false
extraPorts: []
loadBalancerIP:
loadBalancerSourceRanges: []
# chp relates to the proxy pod, which is responsible for routing traffic based
# on dynamic configuration sent from JupyterHub to CHP's REST API.
chp:
revisionHistoryLimit:
containerSecurityContext:
runAsUser: 65534 # nobody user
runAsGroup: 65534 # nobody group
allowPrivilegeEscalation: false
image:
name: jupyterhub/configurable-http-proxy
# tag is automatically bumped to new patch versions by the
# watch-dependencies.yaml workflow.
#
tag: "4.5.3" # https://github.com/jupyterhub/configurable-http-proxy/releases
pullPolicy:
pullSecrets: []
extraCommandLineFlags: []
livenessProbe:
enabled: true
initialDelaySeconds: 60
periodSeconds: 10
failureThreshold: 30
timeoutSeconds: 3
readinessProbe:
enabled: true
initialDelaySeconds: 0
periodSeconds: 2
failureThreshold: 1000
timeoutSeconds: 1
resources: {}
defaultTarget:
errorTarget:
extraEnv: {}
nodeSelector:
kubernetes.azure.com/agentpool: defaultpool
tolerations: []
networkPolicy:
enabled: true
ingress: []
egress: []
egressAllowRules:
cloudMetadataServer: true
dnsPortsPrivateIPs: true
nonPrivateIPs: true
privateIPs: true
interNamespaceAccessLabels: ignore
allowedIngressPorts: [http, https]
pdb:
enabled: false
maxUnavailable:
minAvailable: 1
extraPodSpec: {}
# traefik relates to the autohttps pod, which is responsible for TLS
# termination when proxy.https.type=letsencrypt.
traefik:
revisionHistoryLimit:
containerSecurityContext:
runAsUser: 65534 # nobody user
runAsGroup: 65534 # nobody group
allowPrivilegeEscalation: false
image:
name: traefik
# tag is automatically bumped to new patch versions by the
# watch-dependencies.yaml workflow.
#
tag: "v2.8.4" # ref: https://hub.docker.com/_/traefik?tab=tags
pullPolicy:
pullSecrets: []
hsts:
includeSubdomains: false
preload: false
maxAge: 15724800 # About 6 months
resources: {}
labels: {}
extraInitContainers: []
extraEnv: {}
extraVolumes: []
extraVolumeMounts: []
extraStaticConfig: {}
extraDynamicConfig: {}
nodeSelector:
kubernetes.azure.com/agentpool: defaultpool
tolerations: []
extraPorts: []
networkPolicy:
enabled: true
ingress: []
egress: []
egressAllowRules:
cloudMetadataServer: true
dnsPortsPrivateIPs: true
nonPrivateIPs: true
privateIPs: true
interNamespaceAccessLabels: ignore
allowedIngressPorts: [http, https]
pdb:
enabled: false
maxUnavailable:
minAvailable: 1
serviceAccount:
create: true
name:
annotations: {}
extraPodSpec: {}
secretSync:
containerSecurityContext:
runAsUser: 65534 # nobody user
runAsGroup: 65534 # nobody group
allowPrivilegeEscalation: false
image:
name: jupyterhub/k8s-secret-sync
tag: "2.0.0"
pullPolicy:
pullSecrets: []
resources: {}
labels: {}
https:
enabled: false
type: letsencrypt
#type: letsencrypt, manual, offload, secret
letsencrypt:
contactEmail:
# Specify custom server here (https://acme-staging-v02.api.letsencrypt.org/directory) to hit staging LE
acmeServer: https://acme-v02.api.letsencrypt.org/directory
manual:
key:
cert:
secret:
name:
key: tls.key
crt: tls.crt
hosts: []
# singleuser relates to the configuration of KubeSpawner which runs in the hub
# pod, and its spawning of user pods such as jupyter-myusername.
singleuser:
podNameTemplate:
extraTolerations: []
nodeSelector:
kubernetes.azure.com/agentpool: defaultpool
extraNodeAffinity:
required: []
preferred: []
extraPodAffinity:
required: []
preferred: []
extraPodAntiAffinity:
required: []
preferred: []
networkTools:
image:
name: jupyterhub/k8s-network-tools
tag: "2.0.0"
pullPolicy:
pullSecrets: []
resources: {}
cloudMetadata:
# block set to true will append a privileged initContainer using the
# iptables to block the sensitive metadata server at the provided ip.
blockWithIptables: true
ip: 169.254.169.254
networkPolicy:
enabled: true
ingress: []
egress: []
egressAllowRules:
cloudMetadataServer: false
dnsPortsPrivateIPs: true
nonPrivateIPs: true
privateIPs: false
interNamespaceAccessLabels: ignore
allowedIngressPorts: []
events: true
extraAnnotations: {}
extraLabels:
hub.jupyter.org/network-access-hub: "true"
extraFiles: {}
extraEnv: {}
lifecycleHooks: {}
initContainers: []
extraContainers: []
allowPrivilegeEscalation: false
uid: 1000
fsGid: 100
serviceAccountName:
storage:
type: dynamic
extraLabels: {}
extraVolumes: []
extraVolumeMounts: []
static:
pvcName:
subPath: "{username}"
capacity: 10Gi
homeMountPath: /home/jovyan
dynamic:
storageClass:
pvcNameTemplate: claim-{username}{servername}
volumeNameTemplate: volume-{username}{servername}
storageAccessModes: [ReadWriteOnce]
image:
name: privatecr.io/jupyter/minimal-notebook
tag: "latest"
pullPolicy:
pullSecrets:
- jhub-secret-acr-sp
startTimeout: 300
cpu:
limit:
guarantee:
memory:
limit:
guarantee: 1G
extraResource:
limits: {}
guarantees: {}
cmd: jupyterhub-singleuser
defaultUrl:
extraPodConfig: {}
profileList: []
# scheduling relates to the user-scheduler pods and user-placeholder pods.
scheduling:
userScheduler:
enabled: true
revisionHistoryLimit:
replicas: 2
logLevel: 4
# plugins are configured on the user-scheduler to influence how nodes are
# scored, so that user pods are scheduled on the most busy node. By doing
# this, we help scale down more effectively. It isn't obvious how to
# enable/disable scoring plugins, and configure them, to accomplish this.
#
# plugins ref: https://kubernetes.io/docs/reference/scheduling/config/#scheduling-plugins-1
# migration ref: https://kubernetes.io/docs/reference/scheduling/config/#scheduler-configuration-migrations
#
plugins:
score:
# These scoring plugins are enabled by default according to
# https://kubernetes.io/docs/reference/scheduling/config/#scheduling-plugins
# 2022-02-22.
#
# Enabled with high priority:
# - NodeAffinity
# - InterPodAffinity
# - NodeResourcesFit
# - ImageLocality
# Remains enabled with low default priority:
# - TaintToleration
# - PodTopologySpread
# - VolumeBinding
# Disabled for scoring:
# - NodeResourcesBalancedAllocation
#
disabled:
# We disable these plugins (with regards to scoring) to not interfere
# or complicate our use of NodeResourcesFit.
- name: NodeResourcesBalancedAllocation
# Disable plugins to be allowed to enable them again with a different
# weight and avoid an error.
- name: NodeAffinity
- name: InterPodAffinity
- name: NodeResourcesFit
- name: ImageLocality
enabled:
- name: NodeAffinity
weight: 14631
- name: InterPodAffinity
weight: 1331
- name: NodeResourcesFit
weight: 121
- name: ImageLocality
weight: 11
pluginConfig:
# Here we declare that we should optimize pods to fit based on a
# MostAllocated strategy instead of the default LeastAllocated.
- name: NodeResourcesFit
args:
scoringStrategy:
resources:
- name: cpu
weight: 1
- name: memory
weight: 1
type: MostAllocated
containerSecurityContext:
runAsUser: 65534 # nobody user
runAsGroup: 65534 # nobody group
allowPrivilegeEscalation: false
image:
# IMPORTANT: Bumping the minor version of this binary should go hand in
# hand with an inspection of the user-scheduler's RBAC resources
# that we have forked in
# templates/scheduling/user-scheduler/rbac.yaml.
#
# Debugging advice:
#
# - Is configuration of kube-scheduler broken in
# templates/scheduling/user-scheduler/configmap.yaml?
#
# - Is the kube-scheduler binary compatible with the k8s api-server
# it works against, i.e. is the api-server too new or too old?
#
# - You can update the GitHub workflow that runs tests to
# include "deploy/user-scheduler" in the k8s namespace report
# and reduce the user-scheduler deployments replicas to 1 in
# dev-config.yaml to get relevant logs from the user-scheduler
# pods. Inspect the "Kubernetes namespace report" action!
#
# - Typical failures are that kube-scheduler fails to look up
# resources via its "informers" and won't start trying to
# schedule pods until those lookups succeed; this may require
# additional RBAC permissions, or require that the k8s api-server
# is aware of the resources.
#
# - If "successfully acquired lease" can be seen in the logs, it
# is a good sign kube-scheduler is ready to schedule pods.
#
name: k8s.gcr.io/kube-scheduler
# tag is automatically bumped to new patch versions by the
# watch-dependencies.yaml workflow. The minor version is pinned in the
# workflow, and should be updated there if a minor version bump is done
# here.
#
tag: "v1.23.10" # ref: https://github.com/kubernetes/website/blob/eb2694c8bcc62a1337d8a02f449d38f4947a3ea9/content/en/releases/patch-releases.md
pullPolicy:
pullSecrets: []
nodeSelector:
kubernetes.azure.com/agentpool: defaultpool
tolerations: []
labels: {}
annotations: {}
pdb:
enabled: true
maxUnavailable: 1
minAvailable:
resources: {}
serviceAccount:
create: true
name:
annotations: {}
extraPodSpec: {}
podPriority:
enabled: false
globalDefault: false
defaultPriority: 0
imagePullerPriority: -5
userPlaceholderPriority: -10
userPlaceholder:
enabled: true
image:
name: k8s.gcr.io/pause
# tag is automatically bumped to new patch versions by the
# watch-dependencies.yaml workflow.
#
# If you update this, also update prePuller.pause.image.tag
#
tag: "3.8"
pullPolicy:
pullSecrets: []
revisionHistoryLimit:
replicas: 0
labels: {}
annotations: {}
containerSecurityContext:
runAsUser: 65534 # nobody user
runAsGroup: 65534 # nobody group
allowPrivilegeEscalation: false
resources: {}
corePods:
tolerations:
- key: hub.jupyter.org/dedicated
operator: Equal
value: core
effect: NoSchedule
- key: hub.jupyter.org_dedicated
operator: Equal
value: core
effect: NoSchedule
nodeAffinity:
matchNodePurpose: prefer
userPods:
tolerations:
- key: hub.jupyter.org/dedicated
operator: Equal
value: user
effect: NoSchedule
- key: hub.jupyter.org_dedicated
operator: Equal
value: user
effect: NoSchedule
nodeAffinity:
matchNodePurpose: prefer
# prePuller relates to the hook|continuous-image-puller DaemonSets
prePuller:
revisionHistoryLimit:
labels: {}
annotations: {}
resources: {}
containerSecurityContext:
runAsUser: 65534 # nobody user
runAsGroup: 65534 # nobody group
allowPrivilegeEscalation: false
extraTolerations: []
# hook relates to the hook-image-awaiter Job and hook-image-puller DaemonSet
hook:
enabled: true
pullOnlyOnChanges: true
# image and the configuration below relates to the hook-image-awaiter Job
image:
name: jupyterhub/k8s-image-awaiter
tag: "2.0.0"
pullPolicy:
pullSecrets: []
containerSecurityContext:
runAsUser: 65534 # nobody user
runAsGroup: 65534 # nobody group
allowPrivilegeEscalation: false
podSchedulingWaitDuration: 10
nodeSelector:
kubernetes.azure.com/agentpool: defaultpool
tolerations: []
resources: {}
serviceAccount:
create: true
name:
annotations: {}
continuous:
enabled: true
pullProfileListImages: true
extraImages: {}
pause:
containerSecurityContext:
runAsUser: 65534 # nobody user
runAsGroup: 65534 # nobody group
allowPrivilegeEscalation: false
image:
name: k8s.gcr.io/pause
# tag is automatically bumped to new patch versions by the
# watch-dependencies.yaml workflow.
#
# If you update this, also update scheduling.userPlaceholder.image.tag
#
tag: "3.8"
pullPolicy:
pullSecrets: []
ingress:
enabled: false
annotations: {}
ingressClassName:
hosts: []
pathSuffix:
pathType: Prefix
tls: []
# cull relates to the jupyterhub-idle-culler service, responsible for evicting
# inactive singleuser pods.
#
# The configuration below, except for enabled, corresponds to command-line flags
# for jupyterhub-idle-culler as documented here:
# https://github.com/jupyterhub/jupyterhub-idle-culler#as-a-standalone-script
#
cull:
enabled: true
users: false # --cull-users
adminUsers: true # --cull-admin-users
removeNamedServers: false # --remove-named-servers
timeout: 3600 # --timeout
every: 600 # --cull-every
concurrency: 10 # --concurrency
maxAge: 0 # --max-age
debug:
enabled: true
global:
safeToShowValues: false
When using the Helm chart, you should not copy-paste the default values and then change them.
Instead, you should provide only what you want to change in your own non-default values file.
Deploying a full copy of the defaults is a big risk; I don't want to speculate about what could be wrong until we can rule out something unexpected and hard to debug being caused by that.
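To illustrate (a sketch, not a drop-in file), a non-default values file for this deployment would only carry the overrides, roughly along these lines; the `<REDACTED>` values, hostname and node pool label are placeholders taken from the config pasted above:

```yaml
# Minimal non-default values sketch, based on the settings shown above.
# Everything not listed here stays on the chart defaults.
imagePullSecrets:
  - jhub-secret-acr-sp

hub:
  config:
    AzureAdOAuthenticator:
      enable_auth_state: true
      client_id: <REDACTED>
      client_secret: <REDACTED>
      tenant_id: <REDACTED>
      oauth_callback_url: https://hostname/hub/oauth_callback
      scope: [openid, profile, email]
    JupyterHub:
      authenticator_class: azuread
  nodeSelector:
    kubernetes.azure.com/agentpool: defaultpool

singleuser:
  image:
    name: privatecr.io/jupyter/minimal-notebook
    tag: latest
    pullSecrets:
      - jhub-secret-acr-sp
  nodeSelector:
    kubernetes.azure.com/agentpool: defaultpool

debug:
  enabled: true
```

Keeping the file this small also makes it much easier to see which settings actually differ from the defaults when something goes wrong.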
Thanks for the suggestion. As a trial, I deleted the current Helm installation and re-installed it, this time, as you suggested, with only the custom/non-default values.yaml. Unfortunately the error remains, and the logs are still the same as mentioned above.
This is an important error message: it means that AD auth is succeeding, but it's producing a string that's an illegal username for JupyterHub (e.g. containing a / or starting/ending with a space). Can you share a less-redacted username that shows some of its structure (e.g. replace sequences of letters and numbers with xyz, but leave other characters alone)?
I'm running into this issue now, at a much later date. As far as I can tell, I've matched the configuration above and set up the Azure app correctly. Wondering if there are any additional thoughts?
When trying to log in:
403 : Forbidden
Sorry, you are not currently authorized to use this hub. Please contact the hub administrator.
config.yaml:
[I 2023-08-03 22:18:43.037 JupyterHub log:191] 302 GET / → /hub/ (@::ffff:xxx.xx.xxx.xxx) 0.62ms
[I 2023-08-03 22:18:43.305 JupyterHub log:191] 302 GET /hub/ → /hub/login?next=%2Fhub%2F (@::ffff:xxx.xx.xxx.xxx) 0.67ms
[I 2023-08-03 22:18:43.573 JupyterHub log:191] 302 GET /hub/login?next=%2Fhub%2F → /hub/oauth_login?next=%2Fhub%2F (@::ffff:xxx.xx.xxx.xxx) 0.98ms
[I 2023-08-03 22:18:43.844 JupyterHub oauth2:102] OAuth redirect: https://…/hub/oauth_callback
[D 2023-08-03 22:18:43.845 JupyterHub base:585] Setting cookie oauthenticator-state: {'httponly': True, 'expires_days': 1}
[I 2023-08-03 22:18:43.845 JupyterHub log:191] 302 GET /hub/oauth_login?next=%2Fhub%2F → https://login.microsoftonline.com/…/oauth2/authorize?response_type=code&redirect_uri=https%3A%2F%2F…%2Fhub%2Foauth_callback&client_id=…&state=[secret]&scope=openid+profile+email (@::ffff:xxx.xx.xxx.xxx) 1.48ms
[D 2023-08-03 22:18:44.366 JupyterHub log:191] 200 GET /hub/health (@xxx.xx.xxx.xxx) 0.67ms
[W 2023-08-03 22:18:44.603 JupyterHub auth:533] User 'user@domain.com' not allowed.
[W 2023-08-03 22:18:44.604 JupyterHub base:841] Failed login for unknown user
[W 2023-08-03 22:18:44.604 JupyterHub web:1869] 403 GET /hub/oauth_callback?code=…&state=…&session_state=… (::ffff:xxx.xx.xxx.xxx): Sorry, you are not currently authorized to use this hub. Please contact the hub administrator.
[D 2023-08-03 22:18:44.604 JupyterHub base:1369] No template for 403
[W 2023-08-03 22:18:44.625 JupyterHub log:191] 403 GET /hub/oauth_callback?code=[secret]&state=[secret]&session_state=[secret] (@::ffff:xxx.xx.xxx.xxx) 258.27ms
If I remove the settings suggested here (enable_auth_state / username_claim), it appears the app uses first and last names as the username.
[W 2023-08-03 22:37:43.464 JupyterHub auth:533] User 'first_name last_name' not allowed.
[W 2023-08-03 22:37:43.464 JupyterHub base:841] Failed login for unknown user
If I add that name to admin_users, I'm able to access the hub as expected.
admin_users:
- first_name last_name
This also applies when adding the values suggested here (enable_auth_state / username_claim) and putting the email address in admin_users.
So in effect it only allows users who can be verified via Azure AD and who are explicitly set as admins, which is obviously not practical for normal users.
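If the goal is to let any user in the tenant log in (rather than maintaining an explicit list), the usual options are an allowed_users list or, on hub images shipping OAuthenticator 16 or newer where the allow_all trait exists, explicitly allowing all authenticated users. A hedged sketch, with placeholder addresses, that also pins the username to the email claim as discussed above:

```yaml
hub:
  config:
    AzureAdOAuthenticator:
      # Use the email claim as the JupyterHub username, so allowed_users /
      # admin_users entries can be written as email addresses.
      username_claim: email
      # Option 1: list permitted users explicitly (placeholder addresses).
      allowed_users:
        - user@domain.com
      admin_users:
        - admin@domain.com
      # Option 2 (OAuthenticator >= 16 only): allow every user who
      # successfully authenticates against the Azure AD tenant.
      # allow_all: true
```

Note that the behaviour of an empty allowed_users list changed around OAuthenticator 16 (older releases treated it as "allow any authenticated user", newer ones do not), so it's worth checking which oauthenticator version your hub image ships before relying on either behaviour.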