xinyun / vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git (synced 2025-12-24 17:25:51 +08:00)
vllm / tests / v1
Latest commit 73001445fb: [V1] Implement Cascade Attention (#11635)
Woosuk Kwon, 2025-01-01 21:56:46 +09:00
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
core          [V1] Simplify vision block hash for prefix caching by removing offset from hash (#11646)    2024-12-31 08:56:01 +00:00
e2e           [V1] Implement Cascade Attention (#11635)                                                   2025-01-01 21:56:46 +09:00
engine        [V1] [5/N] API Server: unify Detokenizer and EngineCore input (#11545)                      2024-12-28 20:51:57 +00:00
sample        [V1] Use FlashInfer Sampling Kernel for Top-P & Top-K Sampling (#11394)                     2024-12-27 09:32:38 +09:00
worker        [V1] Adding min tokens/repetition/presence/frequency penalties to V1 sampler (#10681)       2024-12-26 19:02:58 +09:00
__init__.py   [V1] AsyncLLM Implementation (#9826)                                                        2024-11-11 23:05:38 +00:00