xinyun/vllm, mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2025-12-14 16:45:37 +08:00)
vllm/docs/source
Latest commit: f57092c00b by Isotr0py, [Doc] Add oneDNN installation to CPU backend documentation (#8467), 2024-09-13 18:06:30 +00:00
Name                        Last updated                Last commit message
_static                     2024-07-27 09:24:46 -07:00  [Docs] Add RunLLM chat widget (#6857)
_templates/sections         2024-07-10 14:55:34 +08:00  [Doc] Guide for adding multi-modal plugins (#6205)
assets                      2024-04-30 17:41:59 +00:00  [Doc] add visualization for multi-stage dockerfile (#4456)
automatic_prefix_caching    2024-06-11 10:24:59 -07:00  [Doc] Add an automatic prefix caching section in vllm documentation (#5324)
community                   2024-09-09 23:21:00 -07:00  Add NVIDIA Meetup slides, announce AMD meetup, and add contact info (#8319)
dev                         2024-09-06 17:48:48 -07:00  [misc] [doc] [frontend] LLM torch profiler support (#7943)
getting_started             2024-09-13 18:06:30 +00:00  [Doc] Add oneDNN installation to CPU backend documentation (#8467)
models                      2024-09-13 10:20:06 -07:00  [CI/Build] Reorganize models tests (#7820)
performance_benchmark       2024-08-28 13:54:23 -07:00  [Doc] fix 404 link (#7966)
quantization                2024-09-06 16:29:03 -06:00  [Misc] Remove SqueezeLLM (#8220)
serving                     2024-09-05 16:25:29 -04:00  [Documentation][Spec Decode] Add documentation about lossless guarantees in Speculative Decoding in vLLM (#7962)
conf.py                     2024-09-10 22:21:36 -07:00  [model] Support for Llava-Next-Video model (#7559)
generate_examples.py        2024-04-22 16:36:54 +00:00  Add example scripts to documentation (#4225)
index.rst                   2024-08-21 15:39:26 -07:00  [misc] Add Torch profiler support (#7451)