xinyun / vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2025-12-11 00:44:57 +08:00)
Directory: vllm/docs/source/models
Latest commit: a9c8212895 by Zhuohan Li, "[FIX] Add Gemma model to the doc (#2966)", 2024-02-21 09:46:15 -08:00
adding_model.rst        Use NCCL instead of ray for control-plane communication to remove serialization overhead (#2221)    2024-01-03 11:30:22 -08:00
engine_args.rst         [Docs] Update documentation for gpu-memory-utilization option (#2162)                                  2023-12-17 10:51:57 -08:00
lora.rst                multi-LoRA as extra models in OpenAI server (#2775)                                                    2024-02-17 12:00:48 -08:00
supported_models.rst    [FIX] Add Gemma model to the doc (#2966)                                                                2024-02-21 09:46:15 -08:00