Shiny for Python 1.0 launched this week with built-in chatbot functionality. The Chat() component aims to make it “easy to implement generative AI chatbots, powered by any large language model (LLM) of your choosing,” according to the announcement. “The ai_model can be anything, but Chat makes it especially easy to use interfaces from OpenAI, Anthropic, Google, LangChain, and Ollama.”
Shiny 1.0 can be installed with the Python package manager of your choice. With pip:
pip install shiny
There are several ways to implement the LLM back end in a Shiny Python app, but the Shiny creators at Posit recommend starting with LangChain in order to “standardize response generation across different LLMs.”
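To give a sense of the shape of such an app, here is a minimal sketch in Shiny Express syntax, loosely following the published LangChain quickstart pattern. It assumes the langchain-openai package is installed and an OpenAI API key is set in the environment; the model name and handler name below are illustrative rather than drawn from the announcement.

# Minimal sketch of a Shiny chat app backed by LangChain.
# Assumes `pip install shiny langchain-openai` and OPENAI_API_KEY
# in the environment; the model name is illustrative.
from langchain_openai import ChatOpenAI
from shiny.express import ui

llm = ChatOpenAI(model="gpt-4o-mini")

chat = ui.Chat(id="chat")
chat.ui()  # render the chat interface

@chat.on_user_submit
async def handle_user_input():
    # Get the conversation history in a LangChain-compatible format
    messages = chat.messages(format="langchain")
    # Stream the model's reply back into the chat component
    response = llm.astream(messages)
    await chat.append_message_stream(response)

Because the messages are converted to LangChain's format, swapping in a different provider should mostly be a matter of changing the ChatOpenAI line, which is the standardization benefit Posit cites.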
The release comes with a suggested quickstart template as well as templates for model providers including Anthropic, Gemini, Ollama, and OpenAI, all available on GitHub.
For providers that require an API key, include it in a .env file. More information and some retrieval-augmented generation (RAG) recipes are available on the project’s chat examples page on GitHub.
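For example, an OpenAI-backed app would read a key from a file like the one below; the variable name follows the OpenAI SDK’s standard convention, and other providers use their own (e.g. ANTHROPIC_API_KEY):

# .env (keep this file out of version control)
OPENAI_API_KEY=...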
Shiny for Python 1.0 also includes an end-to-end testing framework built around Playwright, two components for rendering data frames, and a styles argument for customizing the appearance of rendered data frames.
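As a rough sketch of the testing framework, an end-to-end test uses pytest fixtures plus Shiny’s Playwright controllers. The app file name, output ID, and expected text below are hypothetical, assumed only for illustration.

# test_app.py -- sketch of a Shiny end-to-end test with Playwright.
# Assumes `pip install pytest pytest-playwright` and that "app.py"
# defines a text output with id "greeting"; names are hypothetical.
from playwright.sync_api import Page
from shiny.playwright import controller
from shiny.pytest import create_app_fixture
from shiny.run import ShinyAppProc

app = create_app_fixture("app.py")  # launches the app for each test

def test_greeting(page: Page, app: ShinyAppProc):
    page.goto(app.url)  # open the running app in the browser
    output = controller.OutputText(page, "greeting")
    output.expect_value("Hello, Shiny!")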