xinyun/vllm
mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2025-12-30 00:51:51 +08:00)
vllm/tests/kernels
History
Latest commit 124776ebd5 by youkaichao: [ci] skip failed tests for flashinfer (#13352), 2025-02-16 22:09:15 +08:00
Signed-off-by: youkaichao <youkaichao@gmail.com>
__init__.py                        …
allclose_default.py                …
conftest.py                        …
quant_utils.py                     …
test_activation.py                 …
test_aqlm.py                       …
test_attention_selector.py         …
test_attention.py                  …
test_awq_marlin.py                 …
test_awq_triton.py                 …
test_awq.py                        …
test_block_fp8.py                  …
test_blocksparse_attention.py      …
test_cache.py                      [Perf] Mem align KV caches for CUDA devices (MLA perf improvement) (#12676)  2025-02-04 18:22:24 -08:00
test_cascade_flash_attn.py         …
test_causal_conv1d.py              …
test_cutlass_2of4_sparse.py        [Kernel][Bugfix] Refactor and Fix CUTLASS 2:4 Sparse Kernels (#13198)  2025-02-14 00:01:14 +00:00
test_cutlass.py                    …
test_encoder_decoder_attn.py       …
test_flash_attn.py                 …
test_flashinfer.py                 [ci] skip failed tests for flashinfer (#13352)  2025-02-16 22:09:15 +08:00
test_fp8_quant.py                  …
test_fused_quant_layernorm.py      …
test_ggml.py                       …
test_gguf.py                       …
test_gptq.py                       …
test_int8_quant.py                 …
test_layernorm.py                  …
test_machete_mm.py                 …
test_mamba_mixer2.py               Add Bamba Model (#10909)  2025-02-06 15:22:42 -08:00
test_mamba_ssm_ssd.py              Add Bamba Model (#10909)  2025-02-06 15:22:42 -08:00
test_mamba_ssm.py                  …
test_marlin_gemm.py                …
test_mha_attn.py                   …
test_moe.py                        …
test_nvfp4_quant.py                [NVIDIA] Support nvfp4 quantization (#12784)  2025-02-12 19:51:51 -08:00
test_permute_cols.py               …
test_pos_encoding.py               [BugFix] Enhance test_pos_encoding to support execution on multi-devices (#13187)  2025-02-16 08:59:49 +00:00
test_prefix_prefill.py             …
test_rocm_attention_selector.py    …
test_rotary_embedding.py           …
test_triton_decode_attention.py    …
test_triton_scaled_mm.py           …
test_utils.py                      …
utils.py                           …