vllm / tests / v1 / e2e

Latest commit: 807d21b80d by 22quinn
[BugFix] [Spec Decode] Remove LlamaForCausalLMEagle3 to fix CI (#22611)
Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com>
2025-08-11 10:31:36 -07:00
__init__.py                             [V1] Implement Cascade Attention (#11635)                                  2025-01-01 21:56:46 +09:00
test_cascade_attention.py               [XPU] Use spawn with XPU multiprocessing (#20649)                          2025-07-09 00:34:28 -07:00
test_correctness_sliding_window.py      [KVCache] Make KVCacheSpec hashable (#21791)                               2025-07-29 19:58:29 +08:00
test_kv_sharing_fast_prefill.py         Fix test_kv_sharing_fast_prefill flakiness (#22038)                        2025-08-01 23:55:34 -07:00
test_spec_decode.py                     [BugFix] [Spec Decode] Remove LlamaForCausalLMEagle3 to fix CI (#22611)    2025-08-11 10:31:36 -07:00