xinyun/vllm, mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2025-12-25 00:46:01 +08:00
vllm / tests / kernels
Latest commit: d0d93b92b1 by Philipp Moritz, "Add unit test for Mixtral MoE layer (#2677)", 2024-01-31 14:34:17 -08:00
File                    Last commit                                                      Date
conftest.py             Support FP8-E5M2 KV Cache (#2279)                                2024-01-28 16:43:54 -08:00
test_activation.py      [FIX] Support non-zero CUDA devices in custom kernels (#1959)    2024-01-02 19:09:59 -08:00
test_attention.py       Support FP8-E5M2 KV Cache (#2279)                                2024-01-28 16:43:54 -08:00
test_cache.py           [Minor] Fix test_cache.py CI test failure (#2684)                2024-01-31 10:12:11 -08:00
test_layernorm.py       [FIX] Support non-zero CUDA devices in custom kernels (#1959)    2024-01-02 19:09:59 -08:00
test_moe.py             Add unit test for Mixtral MoE layer (#2677)                      2024-01-31 14:34:17 -08:00
test_pos_encoding.py    [FIX] Support non-zero CUDA devices in custom kernels (#1959)    2024-01-02 19:09:59 -08:00
test_prefix_prefill.py  Add a 1-line docstring to explain why calling                    2024-01-22 14:47:25 -08:00
                        context_attention_fwd twice in test_prefix_prefill.py (#2553)