Permission denied when ACL is configured

Describe the bug

I created a GitHub issue about this before, but was asked to open a topic over here instead: github issue
When opening a session in JupyterHub, I want to create a file in a directory where an ACL is configured, but I get a permission denied error.

When I try to create a file in the same directory from a terminal session, it works.
When I create the file from a local JupyterLab notebook, it also works.

To Reproduce

$ cd tmpetsxn/
tmpetsxn $ getfacl .
# file: .
# owner: jboss
# group: jboss

tmpetsxn $ touch test1
tmpetsxn $ groups $USER
etsxn : users ducktales
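The same check can be run from a Jupyter terminal (or a notebook cell prefixed with `!`) to compare the effective user against the write attempt; the directory below is a hypothetical stand-in for the ACL-protected one above:

```shell
# Hypothetical test directory standing in for the ACL-protected one
cd /tmp && mkdir -p acl-test && cd acl-test

id -un                     # the user the kernel enforces permissions for
echo "LOGNAME=$LOGNAME"    # what the environment claims

# Same operation that fails in the JupyterHub session
touch test1 && echo "write ok" || echo "write denied"
```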

Expected behavior
Executing touch test1 in a JupyterHub session should create the file.

One notable difference between JupyterHub and a local JupyterLab notebook:
LOGNAME=root is returned in JupyterHub
LOGNAME=etsxn is returned in a local JupyterLab notebook

Can someone investigate why LOGNAME=root is returned on jupyterhub?
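Worth noting: LOGNAME (like USER) is an ordinary environment variable inherited from whatever process spawned the single-user server, while file permissions are enforced against the effective uid, so the two can disagree. A quick comparison from a terminal inside the session:

```shell
# Effective uid/user -- this is what the kernel uses for permission checks
id -u
id -un

# Inherited environment variables -- these only reflect the spawner's environment
printf 'LOGNAME=%s\nUSER=%s\n' "$LOGNAME" "$USER"
```

If `id -un` prints the expected user but LOGNAME says root, the variable is just stale spawner environment rather than the identity the ACL check sees.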

Could you tell us how your JupyterHub is set up? For example, are you installing JupyterHub on a VM, are things running in a container, what authenticators and spawners are you using, what's your configuration (with secrets redacted), etc.

It is running on a physical server.

My configuration looks like this:

c.ConfigurableHTTPProxy.auth_token = '<redacted>'
c.Spawner.notebook_dir = '$HIDDEN/workspace/{username}/'
c.Authenticator.admin_users = {'eylenbt', 'nolletf', 'shaarba', 'millea6', 'norwoodm'}
c.PAMAuthenticator.service = 'jupyterhub'
c.PAMAuthenticator.open_sessions = False
c.JupyterHub.ssl_cert = '/etc/pki/tls/managed/wildcard/wildcard.pem'
c.JupyterHub.ssl_key = '/etc/pki/tls/managed/wildcard/privkey.pem'
c.JupyterHub.debug_proxy = False
c.ConfigurableHTTPProxy.debug = False
c.Spawner.debug = False
c.LocalProcessSpawner.debug = False
c.ConfigurableHTTPProxy.should_start = False
c.JupyterHub.data_files_path = '/tools/eb/software/JupyterHub/1.5.0-GCCcore-10.2.0/share/jupyterhub'
c.JupyterHub.cleanup_proxy = True
c.Spawner.default_url = '/lab'
c.Spawner.cmd = ['jupyter-labhub']
c.JupyterHub.port = 443
c.JupyterHub.template_paths = ['/etc/jupyterhub/templates/']
c.JupyterHub.redirect_to_server = False

import batchspawner
c.BatchSpawnerBase.req_host = 'by0q4n.$DOMAIN_HIDDEN'
c.JupyterHub.spawner_class = 'wrapspawner.ProfilesSpawner'
c.BatchSpawnerBase.req_runtime = '12:00:00'
c.Spawner.http_timeout = 120

c.ProfilesSpawner.profiles = [
    ("Local server", 'local', 'jupyterhub.spawner.LocalProcessSpawner', {'ip': ''}),
    ('BioGrid2 - 2 cores, 4 GB, 10 hours', 'biogrid2c4g10h', 'batchspawner.GridengineSpawner',
     dict(req_nprocs='2', req_queue='main.regular', req_runtime='10:00:00', req_memory='4G', req_memoryhrss='8G')),
    ('BioGrid2 - 6 cores, 64 GB, 4 hours', 'biogrid6c64g4h', 'batchspawner.GridengineSpawner',
     dict(req_nprocs='6', req_queue='main.largemem', req_runtime='4:00:00', req_memory='64G', req_memoryhrss='74G')),
    ('BioGrid2 - 2 cores, 4 GB, 24 hours', 'biogrid2c4g24h', 'batchspawner.GridengineSpawner',
     dict(req_nprocs='2', req_queue='main.regular', req_runtime='24:00:00', req_memory='4G', req_memoryhrss='8G')),
]


c.GridengineSpawner.batch_script =  """#!/bin/bash
#$ -N spawner-jupyterhub
#$ -pe make {nprocs}
#$ -l vf={memory},h_rss={memoryhrss},h_rt={runtime}
#$ -wd /home/{username}/
#$ -hard
#$ -q {queue}
#$ -o {homedir}/workspace/grid-jobs/logs/jupyterhub.sge.out
#$ -e {homedir}/workspace/grid-jobs/logs/jupyterhub.sge.err
#$ -v {keepvars}

. /tools/general/etc/profile.d/
. /tools/bioinfo/etc/profile.d/
ml modules/defaultmns && module load Python/3.8.6-GCCcore-10.2.0 biogrid_R/prod-foss-2020b-R-4.0.3 jupyterlab/3.0.16-foss-2020b JupyterHub/1.5.0-GCCcore-10.2.0 Pandoc/2.13 batchspawner/1.1.1-GCCcore-10.2.0 wrapspawner/1.0.0-GCCcore-10.2.0
"""


Could this be due to your cluster rather than Jupyter? What happens if you create a batch job directly, using similar arguments, that outputs your environment variables?
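A minimal direct test along those lines (hypothetical script name; `qsub` and the queue name are assumed from the config above) would be a job that only dumps its identity and environment:

```shell
#!/bin/bash
# env-test.sge -- submit with: qsub env-test.sge
#$ -N env-test
#$ -q main.regular
#$ -o env-test.out
#$ -e env-test.err

id -un                 # user the job actually runs as
env | sort             # full environment, including LOGNAME/USER
```

Comparing env-test.out against `env` in a working terminal session should show where LOGNAME diverges.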

It is not working in either profile, so it does not matter whether I launch it as a local process or as a batch job.