diff --git a/docs/design/paged_attention.md b/docs/design/paged_attention.md
index 5cc5878425515..53368ab1a79fa 100644
--- a/docs/design/paged_attention.md
+++ b/docs/design/paged_attention.md
@@ -139,18 +139,18 @@ token data.
 const scalar_t* q_ptr = q + seq_idx * q_stride + head_idx * HEAD_SIZE;
 ```

-<figure markdown="span">
-  ![](../assets/design/paged_attention/query.png){ align="center" alt="query" width="70%" }
-</figure>
+<figure markdown="span">
+  ![](../assets/design/paged_attention/query.png){ align="center" alt="query" width="70%" }
+  <figcaption>query</figcaption>
+</figure>

 Each thread defines its own `q_ptr` which points to the assigned
 query token data on global memory. For example, if `VEC_SIZE` is 4
 and `HEAD_SIZE` is 128, the `q_ptr` points to data that contains
 total of 128 elements divided into 128 / 4 = 32 vecs.

-<figure markdown="span">
-  ![](../assets/design/paged_attention/q_vecs.png){ align="center" alt="q_vecs" width="70%" }
-</figure>
+<figure markdown="span">
+  ![](../assets/design/paged_attention/q_vecs.png){ align="center" alt="q_vecs" width="70%" }
+  <figcaption>q_vecs</figcaption>
+</figure>

 ```cpp
 __shared__ Q_vec q_vecs[THREAD_GROUP_SIZE][NUM_VECS_PER_THREAD];
@@ -187,9 +187,9 @@ key token at different iterations.
 As shown above, that `k_ptr` points to key token data based on
 `k_cache` at assigned block, assigned head and assigned token.

-<figure markdown="span">
-  ![](../assets/design/paged_attention/key.png){ align="center" alt="key" width="70%" }
-</figure>
+<figure markdown="span">
+  ![](../assets/design/paged_attention/key.png){ align="center" alt="key" width="70%" }
+  <figcaption>key</figcaption>
+</figure>

 The diagram above illustrates the memory layout for key data. It
 assumes that the `BLOCK_SIZE` is 16, `HEAD_SIZE` is 128, `x` is
@@ -202,9 +202,9 @@ iterations.
 Inside each rectangle, there are a total 32 vecs (128 elements for one
 token) that will be processed by 2 threads (one thread group) separately.

-<figure markdown="span">
-  ![](../assets/design/paged_attention/k_vecs.png){ align="center" alt="k_vecs" width="70%" }
-</figure>
+<figure markdown="span">
+  ![](../assets/design/paged_attention/k_vecs.png){ align="center" alt="k_vecs" width="70%" }
+  <figcaption>k_vecs</figcaption>
+</figure>

 ```cpp
 K_vec k_vecs[NUM_VECS_PER_THREAD]
@@ -361,17 +361,17 @@ later steps. Now, it should store the normalized softmax result of

 ## Value

-<figure markdown="span">
-  ![](../assets/design/paged_attention/value.png){ align="center" alt="value" width="70%" }
-</figure>
+<figure markdown="span">
+  ![](../assets/design/paged_attention/value.png){ align="center" alt="value" width="70%" }
+  <figcaption>value</figcaption>
+</figure>

-<figure markdown="span">
-  ![](../assets/design/paged_attention/logits_vec.png){ align="center" alt="logits_vec" width="50%" }
-</figure>
+<figure markdown="span">
+  ![](../assets/design/paged_attention/logits_vec.png){ align="center" alt="logits_vec" width="50%" }
+  <figcaption>logits_vec</figcaption>
+</figure>

-<figure markdown="span">
-  ![](../assets/design/paged_attention/v_vec.png){ align="center" alt="v_vec" width="70%" }
-</figure>
+<figure markdown="span">
+  ![](../assets/design/paged_attention/v_vec.png){ align="center" alt="v_vec" width="70%" }
+  <figcaption>v_vec</figcaption>
+</figure>

 Now we need to retrieve the value data and perform dot multiplication
 with `logits`. Unlike query and key, there is no thread group
diff --git a/docs/features/quantization/quantized_kvcache.md b/docs/features/quantization/quantized_kvcache.md
index d26a5e217f314..586117272d3ba 100644
--- a/docs/features/quantization/quantized_kvcache.md
+++ b/docs/features/quantization/quantized_kvcache.md
@@ -17,6 +17,16 @@ The E4M3 format offers higher precision compared to E5M2. However, due to its sm

 For now, only per-tensor (scalar) scaling factors are supported. Development is ongoing to support scaling factors of a finer granularity (e.g. per-channel).

+### How FP8 KV Cache Works
+
+The FP8 KV cache implementation follows this workflow:
+
+1. **Storage**: Key and Value tensors are quantized to FP8 format using scaling factors before being stored in the KV cache
+2. **Retrieval**: When needed for attention computation, cached KV tensors are dequantized back to higher precision (FP16/BF16)
+3. **Attention**: The attention-value multiplication (softmax output × V) is performed using the dequantized higher-precision V tensor
+
+This means the final attention computation operates on dequantized values, not FP8 tensors. The quantization reduces memory usage during storage but maintains computation accuracy by using higher precision during the actual attention operations.
+
 ### Performance Impact

 The current FP8 KV cache implementation primarily benefits throughput by allowing approximately double the amount of space for KV cache allocation. This enables either:
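As a rough illustration of the storage → retrieval → attention workflow described in the new "How FP8 KV Cache Works" section, here is a minimal sketch. It is not vLLM's actual kernel or API: it assumes PyTorch with `torch.float8_e4m3fn` support, a per-tensor scaling factor derived from the tensor's absolute maximum, and hypothetical helper names (`quantize_per_tensor`, `dequantize`), with `float32` standing in for the FP16/BF16 compute precision.

```python
import torch

# Minimal sketch of the per-tensor FP8 (E4M3) KV cache workflow:
# quantize on store, dequantize on load, run attention in higher precision.
# Names and shapes are illustrative only; this is not vLLM's kernel code.

FP8_MAX = torch.finfo(torch.float8_e4m3fn).max  # 448.0 for E4M3

def quantize_per_tensor(x: torch.Tensor):
    """Step 1 (Storage): scale into the E4M3 range and cast to FP8."""
    scale = x.abs().max().clamp(min=1e-12) / FP8_MAX  # per-tensor scalar scaling factor
    x_fp8 = (x / scale).clamp(-FP8_MAX, FP8_MAX).to(torch.float8_e4m3fn)
    return x_fp8, scale

def dequantize(x_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Step 2 (Retrieval): cast back to higher precision and undo the scaling."""
    return x_fp8.to(torch.float32) * scale  # FP16/BF16 in practice

# Toy single-head example: 8 cached tokens, head size 128.
k = torch.randn(8, 128)
v = torch.randn(8, 128)
k_cache, k_scale = quantize_per_tensor(k)  # what actually sits in the KV cache
v_cache, v_scale = quantize_per_tensor(v)

q = torch.randn(1, 128)
k_deq = dequantize(k_cache, k_scale)
v_deq = dequantize(v_cache, v_scale)

# Step 3 (Attention): softmax(Q @ K^T / sqrt(d)) @ V uses the dequantized tensors,
# so only the cached storage is FP8; the math itself runs in higher precision.
attn = torch.softmax((q @ k_deq.T) / 128 ** 0.5, dim=-1)
out = attn @ v_deq
print(out.shape)  # torch.Size([1, 128])
```

In vLLM itself the feature is enabled through the `kv_cache_dtype="fp8"` option; the dequantize-then-multiply step above is what the note about the final attention computation operating on dequantized values refers to.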