xinyun / vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git, synced 2025-12-25 01:55:41 +08:00
vllm / tests / kernels
Latest commit: 7a0b011dd5 by Jason Zhu, "Add a 1-line docstring to explain why calling context_attention_fwd twice in test_prefix_prefill.py" (#2553), 2024-01-22 14:47:25 -08:00
File                    Last commit                                                                    Date
conftest.py             [FIX] Support non-zero CUDA devices in custom kernels (#1959)                  2024-01-02 19:09:59 -08:00
test_activation.py      [FIX] Support non-zero CUDA devices in custom kernels (#1959)                  2024-01-02 19:09:59 -08:00
test_attention.py       [CI] Add Buildkite (#2355)                                                     2024-01-14 12:37:58 -08:00
test_cache.py           [CI] Add Buildkite (#2355)                                                     2024-01-14 12:37:58 -08:00
test_layernorm.py       [FIX] Support non-zero CUDA devices in custom kernels (#1959)                  2024-01-02 19:09:59 -08:00
test_pos_encoding.py    [FIX] Support non-zero CUDA devices in custom kernels (#1959)                  2024-01-02 19:09:59 -08:00
test_prefix_prefill.py  Add a 1-line docstring to explain why calling context_attention_fwd twice in test_prefix_prefill.py (#2553)  2024-01-22 14:47:25 -08:00