xinyun/vllm
mirror of https://git.datalinker.icu/vllm-project/vllm.git synced 2025-12-22 13:15:01 +08:00
vllm/docs/source
Latest commit: 9ba0bd6aa6 by Michael Goin, Add lm-eval directly to requirements-test.txt (#9161), 2024-10-08 18:22:31 -07:00
Entry | Last commit | Last updated
_static | [Docs] Add RunLLM chat widget (#6857) | 2024-07-27 09:24:46 -07:00
_templates/sections | [Doc] Guide for adding multi-modal plugins (#6205) | 2024-07-10 14:55:34 +08:00
assets | [Doc] add visualization for multi-stage dockerfile (#4456) | 2024-04-30 17:41:59 +00:00
automatic_prefix_caching | [Doc] Add an automatic prefix caching section in vllm documentation (#5324) | 2024-06-11 10:24:59 -07:00
community | Add NVIDIA Meetup slides, announce AMD meetup, and add contact info (#8319) | 2024-09-09 23:21:00 -07:00
dev | [Core] rename PromptInputs and inputs (#8876) | 2024-09-26 20:35:15 -07:00
getting_started | [Doc] Improve contributing and installation documentation (#9132) | 2024-10-08 20:22:08 +00:00
models | [Doc] Update vlm.rst to include an example on videos (#9155) | 2024-10-08 18:12:29 +00:00
performance_benchmark | [Doc] fix 404 link (#7966) | 2024-08-28 13:54:23 -07:00
quantization | Add lm-eval directly to requirements-test.txt (#9161) | 2024-10-08 18:22:31 -07:00
serving | [Doc]: Add deploying_with_k8s guide (#8451) | 2024-10-07 13:31:45 -07:00
conf.py | [Model] Support for Llava-Next-Video model (#7559) | 2024-09-10 22:21:36 -07:00
generate_examples.py | Add example scripts to documentation (#4225) | 2024-04-22 16:36:54 +00:00
index.rst | [Doc]: Add deploying_with_k8s guide (#8451) | 2024-10-07 13:31:45 -07:00