xinyun / vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2025-12-22 17:45:01 +08:00)
vllm / tests / v1 / e2e
Latest commit: 8781cd6b88 Add Eagle and Eagle3 support to Transformers modeling backend (#30340) by Harry Mellor, 2025-12-11 17:02:10 +00:00
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
File | Last commit | Date
__init__.py | … | …
test_async_scheduling.py | [Chore] Fix torch precision warning (#30428) | 2025-12-11 04:05:56 +00:00
test_async_spec_decode.py | [Attention] Make seq_lens_cpu optional in CommonAttentionMetadata to enable true async spec-decode (#29624) | 2025-12-09 17:18:10 -08:00
test_cascade_attention.py | … | …
test_context_length.py | [Bugfix] Fix validate model input for decoder models (#27099) | 2025-11-13 10:18:47 -08:00
test_correctness_sliding_window.py | [CI][ROCm] Fix test_correctness_sliding_window (#29243) | 2025-12-02 04:53:27 +00:00
test_kv_sharing_fast_prefill.py | [CI][ROCm][tests/v1/e2e] Fix multiprocessing launch for the test (#29123) | 2025-12-02 20:46:10 +00:00
test_lora_with_spec_decode.py | [Misc] remove useless v1 env (#29164) | 2025-11-21 01:41:20 -08:00
test_min_tokens.py | … | …
test_pooling_chunked_prefill.py | … | …
test_spec_decode.py | Add Eagle and Eagle3 support to Transformers modeling backend (#30340) | 2025-12-11 17:02:10 +00:00