xinyun/vllm
mirror of https://git.datalinker.icu/vllm-project/vllm.git
synced 2026-04-07 13:27:05 +08:00
vllm/tests/v1/engine (history)
Latest commit de71fec81b by David Xia: [CI] don't skip fixed test_kv_cache_events() (#18183)
Signed-off-by: David Xia <david@davidxia.com>
2025-05-14 23:17:16 -07:00
__init__.py                 …
conftest.py                 [V1][Metrics] add support for kv event publishing (#16750)                    2025-04-30 07:44:45 -07:00
test_async_llm.py           [V1][Metrics] Allow V1 AsyncLLM to use custom logger (#14661)                 2025-04-25 22:05:40 -07:00
test_engine_args.py         [V1] Revert the default max_num_seqs to V0 values for most hardware (#16158)  2025-04-07 13:54:36 -04:00
test_engine_core_client.py  [CI] don't skip fixed test_kv_cache_events() (#18183)                         2025-05-14 23:17:16 -07:00
test_engine_core.py         [Core] Prevent side-channel attacks via cache salting (#17045)                2025-04-30 20:27:21 +08:00
test_llm_engine.py          [Core] Update dtype detection and defaults (#14858)                           2025-03-19 13:49:33 +08:00
test_output_processor.py    [Core] Prevent side-channel attacks via cache salting (#17045)                2025-04-30 20:27:21 +08:00
utils.py                    Simplify TokenizerGroup (#16790)                                              2025-04-24 04:43:56 -07:00