Jupyter AI + Internal server

Hi all,

We have the following environment:

bash-5.1$ /opt/python/3.9.14/bin/jupyter --version
Selected Jupyter core packages...
IPython          : 8.18.1
ipykernel        : 6.29.3
ipywidgets       : 8.1.2
jupyter_client   : 7.4.9
jupyter_core     : 5.7.1
jupyter_server   : 2.14.1
jupyterlab       : 4.1.4
nbclient         : 0.9.0
nbconvert        : 7.16.1
nbformat         : 5.9.2
notebook         : 6.5.6
qtconsole        : 5.5.1
traitlets        : 5.14.1

OS: RHEL 9.3

We have a Posit platform with Jupyter Notebook installed. We want to use Jupyter AI with our internal LLM server (model: GPT-3.5). I tested the connection with Postman and it works. The steps are as follows:

Step 1 - Get the token - Authorization

https://login.microsoftonline.com/<tenant id>/oauth2/v2.0/authorize?response_type=code&client_id=<client id>&scope=<api scope>&redirect_uri=<redirect_uri>

This returns:

https://<redirect_uri>?code=<code>&session_state=<session_state>
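For reference, the Step 1 authorize URL can be built like this with the standard library (a sketch only; all angle-bracket values are placeholders from the steps above):

```python
# Sketch: building the Step 1 authorize URL. tenant_id, client_id, api scope,
# and redirect_uri are placeholders, not real values.
from urllib.parse import urlencode

tenant_id = "<tenant id>"
params = {
    "response_type": "code",
    "client_id": "<client id>",
    "scope": "<api scope>",
    "redirect_uri": "<redirect_uri>",
}
authorize_url = (
    f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/authorize?"
    + urlencode(params)
)
print(authorize_url)
```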

Then Postman sends a POST:

https://login.microsoftonline.com/<tenant id>/oauth2/v2.0/token
Request Body:
grant_type: "authorization_code"
code: "<code generated in the previous step>"
redirect_uri: "<redirect_uri>"
client_id: "<client_id>"
client_secret: "<client secret>"
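The same token exchange can be sketched in Python as a form body (placeholders throughout; in practice you would POST this, e.g. with `requests.post(token_url, data=token_request)`):

```python
# Sketch of the Step 1 token exchange body. All angle-bracket values are
# placeholders; the POST itself is not executed here.
token_request = {
    "grant_type": "authorization_code",
    "code": "<code generated in the previous step>",
    "redirect_uri": "<redirect_uri>",
    "client_id": "<client_id>",
    "client_secret": "<client secret>",
}
print(token_request["grant_type"])
# -> authorization_code
```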

Step 2 - Get App Context id

POST https://login.microsoftonline.com/<tenant id>/oauth2/v2.0/token
Request Body:
client_id: "<client id>"
scope: "<api scope>"
client_secret: "<client secret>"
grant_type: "client_credentials"
Response Body:
{"token_type":"Bearer","expires_in":3599,"ext_expires_in":3599,"access_token":"<token>"}
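Parsing that response and assembling the Authorization header value looks like this (a sketch; the JSON string mirrors the example response above, with `<token>` as a placeholder):

```python
import json

# Example Step 2 response body, as shown above (placeholder token).
response_body = '{"token_type":"Bearer","expires_in":3599,"ext_expires_in":3599,"access_token":"<token>"}'
token = json.loads(response_body)

# Value for the "Authorization" header on subsequent requests.
bearer = f"{token['token_type']} {token['access_token']}"
print(bearer)
# -> Bearer <token>
```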

Step 3 - Send the message:

POST https://<LLM Server>/api/tryout/v1/public/gpt3/chats/messages
Request Body:
{"messages":[{"role":"user","content":"good morning"}],"model":"gpt3","temperature":0.1}
Response Body:
[{"role":"assistant","content":"Good morning! How are you today?","tokenCount":17,"tokenLimitExceeded":false}]
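The Step 3 chat request body, as Python (a sketch; in practice you would POST the serialized body with the `Authorization: Bearer <token>` header to the `/chats/messages` endpoint):

```python
import json

# Sketch of the Step 3 chat request body; values mirror the example above.
chat_request = {
    "messages": [{"role": "user", "content": "good morning"}],
    "model": "gpt3",
    "temperature": 0.1,
}
body = json.dumps(chat_request)
print(body)
```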

How do I configure Jupyter AI to work with this? With my current configuration I get:

File "/home/< user >/.local/lib/python3.9/site-packages/openai/_base_client.py", line 1599, in _request
raise self._make_status_error_from_response(err.response) from None
openai.APIStatusError: Error code: 405

My configuration in Jupyter AI:
Language Model: OpenAI::gpt-3.5-turbo
Base API URL: https://<my server>/api/tryout/v1/public/gpt3/chats/messages
API Key: <My Server API key>
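A likely explanation of the 405, assuming Jupyter AI's OpenAI provider goes through the standard OpenAI Python client: that client appends "/chat/completions" to whatever base URL it is given, so the composed path no longer matches the server's route. A sketch ("my-server" is a placeholder):

```python
# The OpenAI Python client appends "/chat/completions" to the configured
# base URL, so the actual request path is longer than the configured one.
base_url = "https://my-server/api/tryout/v1/public/gpt3/chats/messages"
endpoint = base_url.rstrip("/") + "/chat/completions"
print(endpoint)
# -> https://my-server/api/tryout/v1/public/gpt3/chats/messages/chat/completions
# The internal server only serves .../chats/messages, which would explain
# the 405 (method/path not allowed at the composed URL).
```

If that is the cause, the Base API URL would need to point at a prefix that actually exposes an OpenAI-compatible /chat/completions route (or a small proxy would have to translate between the two APIs).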

Thanks in advance!