File save error, invalid response 413

I have a k8s JupyterHub deployment using Helm chart 3.3.8, and a user is getting an error popup saying “File Save Error for xxx.ipynb, Invalid response: 413” when saving a notebook with a lot of generated images (the notebook looks to be about 80 MB).
I have updated my chart ingress annotation to something like this:

ingress:
  enabled: true
  ingressClassName: nginx
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 128m

but it’s still happening. Is there anywhere else I should be updating in the chart values?
From the browser network tab I can see the 413, but it doesn’t look like the browser is sending a big payload; the only payload I can see is the query param.

JupyterLab should probably handle a 413 instead of saying “invalid response”, since it means the notebook is too big to save, which isn’t too uncommon. It could recommend clearing outputs before saving again, to avoid losing work.

What ingress controller are you using (chart and version)? Are you using autohttps (I think you can’t with ingress, so presumably not)?

It’s possible that your annotation isn’t having the desired effect, that one or more other proxies in front of the ingress are also enforcing a limit, or that there’s a global limit configured on the ingress controller and the per-ingress setting isn’t actually being applied.
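A notebook save sends the whole notebook back as the request body, so an 80 MB notebook means roughly an 80 MB upload that every proxy in the path has to accept. If the controller is ingress-nginx, a global limit can also be set in its controller ConfigMap; a minimal sketch of what that looks like (the ConfigMap name and namespace below are just common defaults from the ingress-nginx chart, not necessarily yours):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace vary per install
  namespace: ingress-nginx
data:
  # illustrative global default; check whether this or the per-Ingress
  # proxy-body-size annotation is actually taking effect for your Ingress
  proxy-body-size: 8m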

Logs from the ingress controller pod may help identify precisely which layer is raising the error, so you know where to dig deeper to figure out what limit is causing the problem.

I am using Helm chart version 3.3.8 with mostly default values; here’s more of the config below. Does the CHP need any modifications? I tried adding debug logging to the CHP to see if anything jumps out, but nothing shows a 413 there, which leads me to think it may still be the ingress. I will reach out to my EKS admin to see if there’s anything to glean there (this is deployed on AWS EKS; I believe the ingress is fronted by an AWS load balancer, but I will have to check).

ingress:
  enabled: true
  ingressClassName: nginx
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 128m
proxy:
  https:
    enabled: false # ingress is handling ssl
    type: secret
  service:
    type: ClusterIP # default is LoadBalancer
  chp:
    extraCommandLineFlags:
      - "--log-level=debug"

Update: the ingress controller is registry.k8s.io/ingress-nginx/controller:v1.10.0.
I do see the 413 in the ingress controller pod logs.

I think I know what the problem was: the notebook was even bigger than 128m. I set proxy-body-size to 0 and it seems to work now.
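For anyone else who hits this: in nginx, a body size of 0 disables the check entirely, so the working annotation ends up looking like this (value quoted so it is passed as a string):

ingress:
  enabled: true
  ingressClassName: nginx
  annotations:
    # "0" disables nginx's client body size check entirely
    nginx.ingress.kubernetes.io/proxy-body-size: "0"

If disabling the limit entirely feels too permissive, setting it comfortably above the largest expected notebook (e.g. 256m) works too.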