Mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2026-01-02 05:53:59 +08:00)
vllm / tests / compile
Latest commit: 7b2f28deba by Charlie Fu: [AMD][torch.compile] Enable silu+fp8_quant fusion for rocm (#18082), 2025-05-13 22:13:56 -07:00
Signed-off-by: charlifu <charlifu@amd.com>
Name                             Last commit                                                           Date
piecewise                        [Core] Support full cuda graph in v1 (#16072)                         2025-05-07 22:30:15 -07:00
__init__.py                      …
backend.py                       …
conftest.py                      …
test_basic_correctness.py        …
test_full_graph.py               Improve configs - the rest! (#17562)                                  2025-05-09 15:18:44 -07:00
test_functionalization.py        Improve configs - the rest! (#17562)                                  2025-05-09 15:18:44 -07:00
test_fusion.py                   Improve configs - the rest! (#17562)                                  2025-05-09 15:18:44 -07:00
test_pass_manager.py             [Fix] check to make sure processor has chat templates (#18047)        2025-05-13 03:04:10 -07:00
test_sequence_parallelism.py     Improve configs - the rest! (#17562)                                  2025-05-09 15:18:44 -07:00
test_silu_mul_quant_fusion.py    [AMD][torch.compile] Enable silu+fp8_quant fusion for rocm (#18082)   2025-05-13 22:13:56 -07:00
test_wrapper.py                  …