vllm/tests/v1
Latest commit: 3f1fc7425a [V1][CI/Test] Do basic test for top-p & top-k sampling (#12469) by Woosuk Kwon <woosuk.kwon@berkeley.edu>, 2025-01-27 09:40:04 -08:00
Name           | Last commit                                                                                   | Date
core           | [V1] Add uncache_blocks (#12333)                                                              | 2025-01-23 04:19:21 +00:00
e2e            | [V1] Implement Cascade Attention (#11635)                                                     | 2025-01-01 21:56:46 +09:00
engine         | [V1][CI/Test] Do basic test for top-p & top-k sampling (#12469)                               | 2025-01-27 09:40:04 -08:00
sample         | [V1] Use FlashInfer Sampling Kernel for Top-P & Top-K Sampling (#11394)                       | 2024-12-27 09:32:38 +09:00
worker         | [V1] Adding min tokens/repetition/presence/frequence penalties to V1 sampler (#10681)         | 2024-12-26 19:02:58 +09:00
__init__.py    | [V1] AsyncLLM Implementation (#9826)                                                          | 2024-11-11 23:05:38 +00:00
test_stats.py  | [v1][stats][1/n] Add RequestStatsUpdate and RequestStats types (#10907)                       | 2025-01-21 11:51:13 -08:00
test_utils.py  | [V1] Move more control of kv cache initialization from model_executor to EngineCore (#11960)  | 2025-01-17 07:39:35 +00:00