Setting kernel environment variables from within JupyterLab extensions

Hey all,
I built a JupyterLab extension that adds a separate Main Menu entry (via a JupyterFrontEndPlugin). When the user clicks it, the server extension makes an HTTP request to another service and gets a response back. I now have the response on the server side, but I am trying to set it as an environment variable within the kernel (so that I can use it from kernel code). Does anyone have any pointers on how to make that happen?

Thanks in advance!

@adpatter any chance you have an idea on how I can accomplish this?

Can you use os.environ in the server extension?

I tried it but I don't see the var in the notebook kernel. I am trying to access it within my Python kernel :slight_smile:
Btw, are server extension env variables expected to show up in the notebook as well? I thought that wasn't the case.

What python expression are you using to set the environment variable?

`os.environ["VAR"] = "VALUE"`

I just tried it. It seems to be working in my environment.


Let me know if you want me to create an example extension in order to demonstrate this.

Oh interesting! Just to confirm that I am doing the same thing: in the example extension - extension-examples/hello-world at master · jupyterlab/extension-examples · GitHub - where do you set the env var? In jupyterlab_examples_hello_world/?

I set it in the GET handler: extension-examples/handlers.py at dcb1fa62205040308adf9ec9288c10ab1d137709 · jupyterlab/extension-examples · GitHub

In `def get` I wrote `os.environ['TEST'] = 'TEST'`.
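
Something along these lines (a sketch, not the exact file from the repo; the class and endpoint names are illustrative, and the `os.environ` line is the only addition):

import json
import os

# On older setups this import may come from notebook.base.handlers instead.
from jupyter_server.base.handlers import APIHandler
import tornado.web


class RouteHandler(APIHandler):
    @tornado.web.authenticated
    def get(self):
        # Added line: set the variable in the server process before responding.
        os.environ['TEST'] = 'TEST'
        self.finish(json.dumps({"data": "This is /jlab-ext-example/hello endpoint!"}))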

Weird, I am doing the same thing but it doesn't show up in my notebook.

This is how I am doing it in my handler:

import json
import os

# On older setups these may come from notebook.base.handlers / notebook.utils instead.
from jupyter_server.base.handlers import APIHandler
from jupyter_server.utils import url_path_join
from tornado import gen


class JupyterHandler(APIHandler):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        os.environ["TESTENV"] = "ASD"

    @gen.coroutine
    def post(self):
        resp = <some curl response>  # placeholder for the response from the external service
        os.environ['TOKEN'] = resp
        return self.finish(resp)

    @gen.coroutine
    def get(self):
        os.environ["TESTGET"] = "testval"
        self.finish(json.dumps({"data": "This is /jlab-ext-example/hello endpoint!"}))


def _jupyter_server_extension_paths():
    return [{"module": "my_module"}]


def _jupyter_labextension_paths():
    # `data` is loaded elsewhere in the package (typically from labextension/package.json)
    return [{
        "src": "labextension",
        "dest": data["name"]
    }]


def load_jupyter_server_extension(nb_server_app):
    web_app = nb_server_app.web_app
    base_url = web_app.settings['base_url']
    endpoint = url_path_join(base_url, 'custom-endpoint')
    handlers = [(endpoint, JupyterHandler)]
    web_app.add_handlers('.*$', handlers)

Perhaps try printing sys.path in the GET handler and in the notebook, to determine whether the notebook kernel is running in the same Python environment as the process executing the GET handler.
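
For example (illustrative; print on both sides and compare the output):

# In the server extension's GET handler:
import sys
self.log.info("handler sys.path: %s", sys.path)

# In a notebook cell:
import sys
print(sys.path)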

@adpatter Looks like the variables are present in the kernel, but not immediately after triggering the extension. I need to restart the kernel and run the cells again to see the env value (every time I trigger the extension, I need to restart the kernel). I am guessing that when I restart the kernel, it picks up the env variables from the server process.
sys.path looks the same for the kernel as it does for the handler.

Wondering if you had to do something similar, or whether the variables were updated as soon as the extension was triggered?

I started jupyter lab at the command prompt and then opened a Notebook and printed the variable. As such, perhaps I was starting the kernel for the Notebook after the extension had started.

If your extension is on GitHub, please send a link and I will take a look at it.

I would not expect to be able to set environment variables from a server-extension and read them in a kernel. This is fragile for a couple of reasons:

  • some user kernels run remotely
  • some users change how local kernels are spawned (e.g. using a pre-populated pool)

Additionally, even kernels launched from the same process do not share the same environment as the parent; child processes only use the parent process environment as a template from which their own (independent) environment is constructed.
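
A quick way to see this outside Jupyter (a standalone illustration, not extension code):

import os
import subprocess
import sys

# The child inherits a *copy* of the environment as it exists at spawn time.
child = subprocess.Popen(
    [sys.executable, "-c",
     "import os, time; time.sleep(1); print(os.environ.get('TOKEN'))"]
)

# Changing the parent's environment afterwards does not reach the child.
os.environ["TOKEN"] = "secret"
child.wait()  # the child prints "None"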

If you just want a UI that integrates with a kernel, a “simple” way to do this is to create a custom widget (or use ipywidgets). This effectively permits you to put the business logic in the kernel process, and just have the UI controls operate at the frontend-layer.
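
For example, with ipywidgets (a minimal sketch; fetch_value is a placeholder for whatever call retrieves the value from your service): the button is rendered in the frontend, but the callback runs inside the kernel process, so anything it assigns to os.environ is immediately visible to later cells.

import os

import ipywidgets as widgets
from IPython.display import display


def on_click(_button):
    # Runs in the kernel process, so this is the kernel's own environment.
    os.environ["TOKEN"] = fetch_value()  # placeholder for your HTTP call


button = widgets.Button(description="Fetch token")
button.on_click(on_click)
display(button)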

If, however, you need / want to integrate with JupyterLab as a frontend extension, then you could look into a client-server model. In this design, the server extension implements a socket interface that running kernel(s) can talk to. You could similarly use a shared file to communicate between the server extension and the kernel, but this introduces the need for locking, which is best avoided unless essential. This will not work when kernels are running on a separate machine, but that might not be a problem.
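
As a rough sketch of the socket variant (names, port, and the way the value gets updated are all illustrative), the server extension could run a tiny listener in a background thread:

# In the server extension (e.g. started from load_jupyter_server_extension):
import socketserver
import threading

_token = ""  # updated by the handler when the value arrives


class _TokenRequestHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # Reply to any connection with the current value.
        self.request.sendall(_token.encode())


def start_token_server(port=18765):  # illustrative port
    server = socketserver.TCPServer(("127.0.0.1", port), _TokenRequestHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()

and on the kernel side, a cell would just connect and read:

import socket

with socket.create_connection(("127.0.0.1", 18765)) as s:
    token = s.recv(4096).decode()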

Without more information about what you’re planning to do here, it’s hard to provide any more detailed suggestions. Perhaps you could expand on what you want to achieve?

Thanks for the response @agoose77 and @adpatter. I will put my extension on GitHub later today.
Good to know that setting env variables server-side and reading them in the kernel is not really reliable.

Overview of what I am trying to do here: I built a frontend extension (it adds a separate menu option) which, when the user clicks it, sends a request to a separate service and gets a value back. I now have the value available within my TS code, but I want to make it available within the kernel.

You mentioned the server extension can expose a socket interface which the kernel can talk to; is there documentation for this? Using this I could potentially query the server extension from the kernel to get the value back (and not have to go through the env var route).

Hey @aish :wave:
Can you give an update on how you solved this issue?
I am also looking into setting up the kernel environment based on the settings of a JupyterLab extension.

@pll_llq hey, I ended up using a shared filesystem (you can write to a common path, e.g. '/tmp/file.txt', from within the server, which can then be read by the kernel as well). Not sure what your use case is, but you might have to take care of locking in case multiple kernels try to write at the same time, and so on.
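
For completeness, the pattern is roughly this (path is illustrative, and it only works when the server and the kernel share a filesystem):

# Server extension side, after receiving the value:
with open("/tmp/file.txt", "w") as f:
    f.write(resp)

# Kernel side, in a notebook cell:
with open("/tmp/file.txt") as f:
    token = f.read().strip()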


Did you write to the file directly from TypeScript?