diff --git a/docs/deployment/frameworks/streamlit.md b/docs/deployment/frameworks/streamlit.md
index af0f0690c68e2..c119878f137a4 100644
--- a/docs/deployment/frameworks/streamlit.md
+++ b/docs/deployment/frameworks/streamlit.md
@@ -6,35 +6,33 @@ It can be quickly integrated with vLLM as a backend API server, enabling powerfu
 
 ## Prerequisites
 
-- Setup vLLM environment
+Set up the vLLM environment by installing all required packages:
+
+```bash
+pip install vllm streamlit openai
+```
 
 ## Deploy
 
-- Start the vLLM server with the supported chat completion model, e.g.
+1. Start the vLLM server with a supported chat completion model, e.g.
 
-```bash
-vllm serve qwen/Qwen1.5-0.5B-Chat
-```
+    ```bash
+    vllm serve Qwen/Qwen1.5-0.5B-Chat
+    ```
 
-- Install streamlit and openai:
+1. Use the script:
 
-```bash
-pip install streamlit openai
-```
+1. Start the Streamlit web UI and start chatting:
 
-- Use the script:
-
-- Start the streamlit web UI and start to chat:
-
-```bash
-streamlit run streamlit_openai_chatbot_webserver.py
-
-# or specify the VLLM_API_BASE or VLLM_API_KEY
-VLLM_API_BASE="http://vllm-server-host:vllm-server-port/v1" \
-streamlit run streamlit_openai_chatbot_webserver.py
-
-# start with debug mode to view more details
-streamlit run streamlit_openai_chatbot_webserver.py --logger.level=debug
-```
-
-![](../../assets/deployment/streamlit-chat.png)
+    ```bash
+    streamlit run streamlit_openai_chatbot_webserver.py
+
+    # or specify the VLLM_API_BASE or VLLM_API_KEY
+    VLLM_API_BASE="http://vllm-server-host:vllm-server-port/v1" \
+    streamlit run streamlit_openai_chatbot_webserver.py
+
+    # start with debug mode to view more details
+    streamlit run streamlit_openai_chatbot_webserver.py --logger.level=debug
+    ```
+
+    ![Chat with vLLM assistant in Streamlit](../../assets/deployment/streamlit-chat.png)
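
For context on what the patched page wires together: the `streamlit_openai_chatbot_webserver.py` script talks to the vLLM server through its OpenAI-compatible REST API, resolving the endpoint from the `VLLM_API_BASE`/`VLLM_API_KEY` environment variables shown above. A rough stdlib-only sketch of that interaction (the helper names here are illustrative, not part of the actual script):

```python
import json
import os
import urllib.request

# Default mirrors a locally running `vllm serve` instance; the docs show how
# to override it via VLLM_API_BASE when the server runs elsewhere.
DEFAULT_API_BASE = "http://localhost:8000/v1"


def resolve_api_base() -> str:
    """Pick the server URL from VLLM_API_BASE, falling back to localhost."""
    return os.environ.get("VLLM_API_BASE", DEFAULT_API_BASE).rstrip("/")


def build_chat_request(api_base: str, model: str,
                       messages: list[dict]) -> urllib.request.Request:
    """Build a POST request for the OpenAI-compatible /chat/completions route."""
    payload = json.dumps({"model": model, "messages": messages}).encode()
    headers = {
        "Content-Type": "application/json",
        # vLLM accepts any key unless the server was started with --api-key.
        "Authorization": f"Bearer {os.environ.get('VLLM_API_KEY', 'EMPTY')}",
    }
    return urllib.request.Request(f"{api_base}/chat/completions",
                                  data=payload, headers=headers)


if __name__ == "__main__":
    req = build_chat_request(
        resolve_api_base(),
        "Qwen/Qwen1.5-0.5B-Chat",
        [{"role": "user", "content": "Hello!"}],
    )
    # Sending the request (urllib.request.urlopen(req)) requires the vLLM
    # server from step 1 to be running; here we only show the resolved target.
    print(req.full_url)
```

In the real script the same request is issued once per user turn and the assistant reply from `choices[0].message.content` is rendered back into the Streamlit chat widget.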