
vLLM documentation

Build the docs

  • Make sure you are in the docs directory:
cd docs
  • Install the dependencies:
pip install -r ../requirements/docs.txt
  • Clean the previous build (optional but recommended):
make clean
  • Generate the HTML documentation:
make html
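
If you want build problems to surface early, Sphinx-generated Makefiles pass SPHINXOPTS through to sphinx-build, so you can turn warnings into errors. This is a minimal sketch assuming the standard Makefile layout; if this project's Makefile was customized, the variable may not be honored:

make html SPHINXOPTS="-W --keep-going"

Here -W promotes warnings to errors and --keep-going reports all of them instead of stopping at the first one.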

Open the docs in your browser

  • Serve the documentation locally:
python -m http.server -d build/html/

This will start a local server at http://localhost:8000. You can now open your browser and view the documentation.

If port 8000 is already in use, you can specify a different port, for example:

python -m http.server 3000 -d build/html/
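
If you edit the docs and rebuild often, the sphinx-autobuild package (an optional extra, not listed in the requirements file above) can rebuild and re-serve the pages automatically whenever a source file changes. A minimal sketch, assuming the Sphinx sources live under source/ as in the default Makefile layout:

pip install sphinx-autobuild
sphinx-autobuild source build/html --port 8000

The watcher rebuilds on save and should refresh the page in your browser after each rebuild.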