TLJH with CUDA GPU Management

Hello everyone,
I just joined this community, and this is my first message here. I am a second-year SE student in Serbia. I've set up JupyterHub (TLJH) on my Ubuntu server, which has 8 GPUs, and I am looking to restrict and manage GPU resources for the JupyterHub users. The scheme would look something like this:
-Admin
|- user1: 2 usable GPUs
|- user2: 3 usable GPUs
|- user3: 4 usable GPUs
|- user4: 1 usable GPU

I looked through documentation, such as:
-Setting up GPU Data Science Environments for Hackathons | by Jacob Tomlinson | RAPIDS AI | Medium
-https://developer.nvidia.com/blog/cuda-pro-tip-control-gpu-visibility-cuda_visible_devices/

(Trust me, my first three pages of Google results are purple :smiley:.) Still, I could not find anything that fits my case and actually works. Here is what I have so far, as shown by sudo tljh-config show:
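From the NVIDIA post above, my understanding is that CUDA_VISIBLE_DEVICES just filters which physical GPUs the CUDA runtime enumerates, and that it must be set before any CUDA-using library initializes. A tiny sketch of what I mean from inside a notebook (the torch line is an assumption about the stack, so I left it commented):

```python
import os

# Must happen before importing any library that initializes CUDA,
# otherwise the runtime has already enumerated all 8 devices.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"  # only physical GPUs 0 and 1 are visible

# import torch  # (hypothetical) torch.cuda.device_count() should now report 2
```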

users:
  admin:
  - skynet
  allowed:
  - good-user_1
  - marko
limits:
  memory: 4G
  marko:
    CUDA_VISIBLE_DEVICES: 0,1,2
https:
  enabled: true
user_environment:
  default_app: jupyterhub
marko:
  CUDA_VISIBLE_DEVICES: 0,1,2
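I suspect the marko: entries above are simply ignored, since tljh-config does not seem to recognize per-user keys like that. What I am effectively trying to express would, I think, look something like this as a pre_spawn_hook in jupyterhub_config.py — just a sketch, and the user-to-GPU mapping names and device IDs are my own example, not real config:

```python
# Sketch for jupyterhub_config.py: restrict each user's visible GPUs by
# injecting CUDA_VISIBLE_DEVICES into their single-user server environment.
# The mapping below is a hypothetical example layout for 8 physical GPUs.

GPU_ASSIGNMENTS = {
    "user1": "0,1",      # 2 usable GPUs
    "user2": "2,3,4",    # 3 usable GPUs
    "user4": "7",        # 1 usable GPU
    "marko": "0,1,2",
}

def assign_gpus(spawner):
    """Called before each single-user server starts; filters visible GPUs."""
    devices = GPU_ASSIGNMENTS.get(spawner.user.name)
    if devices is not None:
        spawner.environment["CUDA_VISIBLE_DEVICES"] = devices
    else:
        # Hide all GPUs from users without an explicit assignment.
        spawner.environment["CUDA_VISIBLE_DEVICES"] = ""

# In jupyterhub_config.py this would be wired up as:
# c.Spawner.pre_spawn_hook = assign_gpus
```

Would something along these lines be the right direction, or is there a supported TLJH way to do this?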

Have you experienced a similar problem, and what would you advise me to do? Is it even possible to manage GPU resources through the JupyterHub interface?

Thank you in advance for your time.

I am looking forward to hearing from you!