Widget fails to display response from LLM

I am running a test with the phidata module to check the response from an LLM, using interactive mode in VS Code:


from phi.agent import Agent
from phi.model.groq import Groq 
from dotenv import load_dotenv

load_dotenv()

agent = Agent(
    model=Groq(id="llama-3.3-70b-versatile")
)

agent.print_response("Briefly describe how checks and balances work in U.S. politics.")

The output in the terminal looks good: it prints a short paragraph.
But here’s the output from interactive mode:

Does anyone know how to fix it? Thanks!