What should I do about the frozen modules warning?

When I launch JupyterLab (with jupyter lab) I get the warning:

0.00s - Debugger warning: It seems that frozen modules are being used, which may
0.00s - make the debugger miss breakpoints. Please pass -Xfrozen_modules=off
0.00s - to python to disable frozen modules.
0.00s - Note: Debugging will proceed. Set PYDEVD_DISABLE_FILE_VALIDATION=1 to disable this validation.
  1. I assume that if I’m debugging I should do something about this so I don’t “miss breakpoints”, or can I just ignore it (and even stop seeing it by setting PYDEVD_DISABLE_FILE_VALIDATION=1)?
  2. If I should indeed do something about this, how do I configure my virtual environment — or perhaps Jupyter? — so that -Xfrozen_modules=off is passed automatically whenever I run jupyter lab?

I’m aware that there are other similar questions but they seem to be old or stuck.


I’m having the same issue.
I’m also dealing with a crashing kernel, and this warning is the only feedback I get in the logs. :frowning:

It is likely unrelated to the crash. As answered in the other thread, the warning is harmless. Please open a bug report in the appropriate repository (you did not mention which kernel it is) with all the details; the most frequent cause of a crash is your system running out of memory, which you can measure with the jupyter-resource-usage plugin.
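If you want a quick sanity check before installing the plugin, something along these lines can be run in a notebook cell to watch the kernel’s own memory use (a minimal sketch, assuming the psutil package is available in the kernel’s environment):

# Rough sketch: print the kernel's resident memory from inside a notebook cell.
# Assumes the psutil package is installed in the kernel's environment.
import os
import psutil

rss_mb = psutil.Process(os.getpid()).memory_info().rss / 1024**2
print(f"kernel resident memory: {rss_mb:.1f} MiB")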

I think you can either ignore it, or contribute a fix adding -Xfrozen_modules=off to ipython/ipykernel (IPython Kernel for Jupyter), where the debugger is implemented, because this is what other IDEs appear to be doing.
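In the meantime, if you only want to stop seeing the message, the environment variable mentioned in the warning can be set before JupyterLab starts. A minimal sketch, assuming jupyter is on your PATH (this only hides the validation message; it does not change how the debugger behaves):

# Launch JupyterLab with the pydevd file-validation warning silenced.
# Exporting the variable in your shell profile has the same effect.
import os
import subprocess

env = dict(os.environ, PYDEVD_DISABLE_FILE_VALIDATION="1")
subprocess.run(["jupyter", "lab"], env=env, check=True)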

Thanks!
I’m an experienced developer, but not well versed in the ways of Python.
I’m currently using Python 3.11.5 on my Mac.

I would be surprised if it was a memory issue, but I will definitely look into it!
In my case the problem disappears when I comment out a few imports (numpy, googlecloud, vertexai).

The cells themselves get executed (I put a debug print at the end of each cell), but if I have those libraries imported it crashes before reaching a certain cell… suggesting it could be something async/future-related that I haven’t yet identified.

Any suggestions?

In my case the problem disappears when I comment out a few imports (numpy, googlecloud, vertexai).

This strongly suggests that it is not a Jupyter problem. The usual way to debug such issues is by narrowing the scope. First, check whether the same code works when run in IPython from a terminal. Then check whether it runs in plain Python. Then narrow down which import causes the issue. By that point you should at least have reproduction instructions good enough to open a bug report somewhere.
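For the narrowing step, a rough sketch like the following can be run first in plain Python from a terminal, then in IPython, then in a notebook cell (the module names are guesses based on the imports mentioned above; adjust them to whatever you actually import):

# Import each suspect library one at a time to see which one, if any,
# brings the process down, and in which environment the behaviour diverges.
import importlib

suspects = ["numpy", "google.cloud", "vertexai"]  # guesses; adjust to your real imports

for name in suspects:
    print(f"importing {name} ...", flush=True)
    importlib.import_module(name)
    print(f"{name} imported OK", flush=True)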

I seem to have figured out how to work around it.
I was running all cells in the notebook at once.

By running the cells one by one, it seems to be working.
So it could still be a memory issue, although the resource plugin didn’t show anything significant.

What does that mean?


FYI:

I would appreciate it if you could confirm that adding "-Xfrozen_modules=off" as the second argument of the argv list in the kernel.json of your kernelspec (run jupyter kernelspec list from a terminal to find where these are located) solves the problem and does not introduce any new problems.

For example, the change could look like:

{
 "argv": [
  "python",
+ "-Xfrozen_modules=off",
  "-m",
  "ipykernel_launcher",
  "-f",
  "{connection_file}"
 ],
 "display_name": "Python 3 (ipykernel)",
 "language": "python",
 "metadata": {
  "debugger": true
 }
}
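If you would rather not edit the file by hand, a rough sketch along these lines could apply the same change, assuming your kernelspec is named python3 (the default) and that your jupyter_client version supports jupyter kernelspec list --json; back up kernel.json first:

# Insert -Xfrozen_modules=off into an existing kernelspec's kernel.json.
# Assumes the kernelspec is named "python3"; back the file up before editing.
import json
import pathlib
import subprocess

out = subprocess.run(
    ["jupyter", "kernelspec", "list", "--json"],
    capture_output=True, text=True, check=True,
)
resource_dir = json.loads(out.stdout)["kernelspecs"]["python3"]["resource_dir"]
kernel_json = pathlib.Path(resource_dir) / "kernel.json"

spec = json.loads(kernel_json.read_text())
if "-Xfrozen_modules=off" not in spec["argv"]:
    spec["argv"].insert(1, "-Xfrozen_modules=off")  # right after the interpreter path
    kernel_json.write_text(json.dumps(spec, indent=1))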

I tried that, but the warning is still showing:

{
 "argv": [
  "/home/irvingl/anaconda3/bin/python",
  "-Xfrozen_modules=off",
  "-m",
  "ipykernel_launcher",
  "-f",
  "{connection_file}"
 ],
 "display_name": "Python 3 (ipykernel)",
 "language": "python",
 "metadata": {
  "debugger": false
 }
}

I also tried with debugger set to true and false, but the same warning still appears.

  1. Do you have the latest version of ipykernel installed? I would recommend 6.29.4 (the latest at the time of writing) or newer; see the snippet below for a quick way to check.
  2. Did you verify that this kernelspec corresponds to the kernel that you are testing? You could, for example, modify display_name and see whether it changes.
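To check the first point, the following can be run in a cell of the notebook in question; it reports the ipykernel version actually loaded by that kernel, which may differ from what pip shows in another environment:

# Report the ipykernel version loaded by the running kernel.
import ipykernel
print(ipykernel.__version__)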