xinyun / vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2025-12-09 20:04:27 +08:00
vllm / tests / model_executor
Latest commit: 4fd9375028 by youkaichao, 2024-11-16 18:02:14 -08:00
[2/N][torch.compile] make compilation cfg part of vllm cfg (#10383)
Signed-off-by: youkaichao <youkaichao@gmail.com>
File                             Last commit                                                                 Date
__init__.py                      [CI/Build] Move test_utils.py to tests/utils.py (#4425)                    2024-05-13 23:50:09 +09:00
conftest.py                      [Frontend][Core] Move guided decoding params into sampling params (#8252)  2024-10-01 09:34:25 +08:00
test_enabled_custom_ops.py       [2/N][torch.compile] make compilation cfg part of vllm cfg (#10383)        2024-11-16 18:02:14 -08:00
test_guided_processors.py        [Frontend][Core] Move guided decoding params into sampling params (#8252)  2024-10-01 09:34:25 +08:00
test_model_load_with_params.py   Support Roberta embedding models (#9387)                                    2024-11-14 21:23:29 +00:00
weight_utils.py                  [Core] Support offline use of local cache for models (#4374)               2024-04-27 09:59:55 -07:00