vllm/tests/v1/tpu
Latest commit: 5b4ba2e1e1 by Chengji Yao (2025-10-03 13:35:54 -07:00): [TPU][Bugfix] fix the missing apply_model in tpu worker (#25526)
  Signed-off-by: Chengji Yao <chengjiyao@google.com>
  Signed-off-by: yewentao256 <zhyanwentao@126.com>
File                                Last commit                                                                                                     Date
worker/                             [Core] Use CpuGpuBuffer for block table tensors (#24795)                                                       2025-09-16 19:18:06 -07:00
__init__.py                         …
test_basic.py                       [TPU][Test] Rollback PR-21550. (#21619)                                                                        2025-07-25 13:22:01 -07:00
test_kv_cache_update_kernel.py      [TPU] kv cache update kernel doesn't need to be padded slices to multiple of num_slices_per_block (#22394)    2025-08-09 20:49:04 -07:00
test_mha_attn.py                    [V0 Deprecation][TPU] Remove V1 flag check from tests (#22248)                                                 2025-08-05 06:53:23 -07:00
test_multimodal.py                  [CI/Build] Serve images used by multimodal tests through local HTTP Server (#23907)                            2025-09-03 16:13:11 +08:00
test_pallas.py                      [Attention][FlashInfer] Enable FP8 FlashInfer (TRTLLM) MLA decode (#24705)                                     2025-09-12 15:45:53 -06:00
test_perf.py                        …
test_sampler.py                     [V0 Deprecation][TPU] Remove V1 flag check from tests (#22248)                                                 2025-08-05 06:53:23 -07:00
test_spmd_model_weight_loading.py   …
test_topk_topp_sampler.py           [TPU] Deprecate xm.mark_step in favor of `torch_xla.sync` (#25254)                                             2025-10-03 13:35:53 -07:00
test_tpu_int8.py                    [TPU][Bugfix] fix the missing apply_model in tpu worker (#25526)                                               2025-10-03 13:35:54 -07:00
test_tpu_qkv_linear.py              …
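The files above are standard pytest modules. As a minimal sketch only (assuming a TPU-capable environment with vLLM and pytest installed, and the vLLM repository root as the working directory), one module from this directory could be invoked programmatically like this; the test path comes from the listing above, everything else is illustrative:

    # Minimal sketch: run a single TPU test module through pytest's Python API.
    # Assumes vLLM (with TPU support) and pytest are installed and that this is
    # executed from the vLLM repository root, where tests/v1/tpu/ lives.
    import sys
    import pytest

    if __name__ == "__main__":
        # pytest.main() takes the same arguments as the pytest CLI and returns
        # an exit code; -v prints one line per test.
        sys.exit(pytest.main(["tests/v1/tpu/test_basic.py", "-v"]))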