Testing JupyterAI RootChatHandler

Hey, I wanted to add a quick pytest test case that checks whether I can query a model from a specific provider (in my case, Bedrock), and I wasn't sure of the best way to do this. I saw some test cases with mock LLMs (in test_llms.py). Here is what I have so far:

import pytest
from jupyter_ai.extension import AiExtension

KNOWN_LM_A = "bedrock"

@pytest.mark.parametrize(
    "argv",
    [
        ["--AiExtension.allowed_providers", KNOWN_LM_A],
    ],
)
def test_root_chat_handler(argv, jp_configurable_serverapp):
    # Start a configurable test server with the allowed_providers setting,
    # then link the AiExtension to it.
    server = jp_configurable_serverapp(argv=argv)
    ai = AiExtension()
    ai._link_jupyter_server_extension(server)

    # The message I would like to send through the handler (unused so far).
    msg = {"prompt": "What's up doc?"}

    print(ai.handlers[2][1])

    # Placeholder assertion until I figure out how to exercise the handler.
    assert True

This prints out the RootChatHandler class, but it's not instantiated yet, of course. I am not sure if I'm even approaching this the right way. Any guidance would be much appreciated 🙂
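For what it's worth, here is the general shape of the mock-LLM approach I was thinking of, sketched without any jupyter-ai APIs. The `FakeLLM` class, `handle_message` function, and `invoke` method are all hypothetical stand-ins for a provider-backed model and handler, just to show the pattern of asserting on a canned response instead of hitting a real Bedrock endpoint:

```python
class FakeLLM:
    """Stands in for a provider-backed model; returns canned replies."""

    def __init__(self, responses):
        self._responses = list(responses)

    def invoke(self, prompt: str) -> str:
        # Return the next canned response regardless of the prompt.
        return self._responses.pop(0)


def handle_message(llm, message: dict) -> str:
    """Hypothetical handler logic: forward the prompt to the model."""
    return llm.invoke(message["prompt"])


def test_handler_uses_mock_llm():
    llm = FakeLLM(["Not much, doc!"])
    reply = handle_message(llm, {"prompt": "What's up doc?"})
    # The handler should surface the mock model's canned reply.
    assert reply == "Not much, doc!"


test_handler_uses_mock_llm()
```

With something like this, the test never needs provider credentials; whether jupyter-ai's handlers can be wired to such a fake in the same way is exactly what I'm unsure about.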