From 0eca5eacd07061c5692250d0ef86267da831bd9f Mon Sep 17 00:00:00 2001
From: Se7en
Date: Mon, 9 Jun 2025 17:30:02 +0800
Subject: [PATCH] [Doc] Fix description in the Automatic Prefix Caching design doc (#19333)

Signed-off-by: cr7258
---
 docs/design/v1/prefix_caching.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/design/v1/prefix_caching.md b/docs/design/v1/prefix_caching.md
index bbdfb255214dd..e87e4c6a48b73 100644
--- a/docs/design/v1/prefix_caching.md
+++ b/docs/design/v1/prefix_caching.md
@@ -144,7 +144,7 @@ As a result, we will have the following components when the KV cache manager is
 
 **Running request:** Workflow for the scheduler to schedule a running request with KV cache block allocation:
 
-1. The scheduler calls `kv_cache_manager.append_slots()`. It does the following steps:
+1. The scheduler calls `kv_cache_manager.allocate_slots()`. It does the following steps:
     1. Compute the number of new required blocks, and return if there are no sufficient blocks to allocate.
     2. Allocate new blocks by popping the heads of the free queue. If the head block is a cached block, this also “evicts” the block so that no other requests can reuse it anymore from now on.
     3. Append token IDs to the slots in existing blocks as well as the new blocks. If a block is full, we add it to the Cache Block to cache it.
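
For context, below is a minimal Python sketch of the three-step `allocate_slots()` workflow that the corrected doc line describes. This is not vLLM's actual `KVCacheManager`; the names `Block`, `free_queue`, `cached_block_map`, `KVCacheManagerSketch`, and the fixed `BLOCK_SIZE` are assumptions made for illustration only.

```python
# A minimal sketch (not vLLM's actual implementation) of the allocate_slots()
# workflow described in the patched doc. Block, free_queue, cached_block_map,
# and BLOCK_SIZE are hypothetical names chosen for illustration.
from __future__ import annotations

from collections import deque
from dataclasses import dataclass, field
from typing import Optional

BLOCK_SIZE = 16  # assumed number of token slots per KV cache block


@dataclass
class Block:
    block_id: int
    token_ids: list[int] = field(default_factory=list)
    block_hash: Optional[int] = None  # set once the block is full and cached


class KVCacheManagerSketch:
    def __init__(self, num_blocks: int) -> None:
        # Free queue: blocks are popped from the head when allocating.
        self.free_queue: deque[Block] = deque(Block(i) for i in range(num_blocks))
        # Cache of full blocks, keyed by their content hash.
        self.cached_block_map: dict[int, Block] = {}

    def allocate_slots(
        self, req_blocks: list[Block], new_token_ids: list[int]
    ) -> Optional[list[Block]]:
        # Step 1: compute how many new blocks are required; return None if
        # the free queue cannot satisfy the request.
        free_slots = sum(BLOCK_SIZE - len(b.token_ids) for b in req_blocks)
        tokens_needing_blocks = max(0, len(new_token_ids) - free_slots)
        num_new_blocks = -(-tokens_needing_blocks // BLOCK_SIZE)  # ceiling division
        if num_new_blocks > len(self.free_queue):
            return None

        # Step 2: pop new blocks from the head of the free queue. If a popped
        # block is still cached, "evict" it so no other request can reuse it.
        new_blocks: list[Block] = []
        for _ in range(num_new_blocks):
            block = self.free_queue.popleft()
            if block.block_hash is not None:
                self.cached_block_map.pop(block.block_hash, None)
                block.block_hash = None
            block.token_ids.clear()
            new_blocks.append(block)
        req_blocks.extend(new_blocks)

        # Step 3: append token IDs to free slots in existing blocks first,
        # then the new blocks; cache a block as soon as it becomes full.
        for token_id in new_token_ids:
            block = next(b for b in req_blocks if len(b.token_ids) < BLOCK_SIZE)
            block.token_ids.append(token_id)
            if len(block.token_ids) == BLOCK_SIZE:
                block.block_hash = hash(tuple(block.token_ids))
                self.cached_block_map[block.block_hash] = block

        return new_blocks
```

The sketch hashes each full block in isolation; the design doc's actual scheme also incorporates the hash of the preceding block so that a cached block identifies a full prefix, and its free queue is ordered for eviction, details omitted here for brevity.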