diff --git a/docs/usage/v1_guide.md b/docs/usage/v1_guide.md
index baeb5411bcfd..03f313aaef0f 100644
--- a/docs/usage/v1_guide.md
+++ b/docs/usage/v1_guide.md
@@ -54,7 +54,7 @@ This living user guide outlines a few known **important changes and limitations*
| **FP8 KV Cache** | 🟢 Functional on Hopper devices ([PR #15191](https://github.com/vllm-project/vllm/pull/15191))|
| **Spec Decode** | 🚧 WIP ([PR #13933](https://github.com/vllm-project/vllm/pull/13933))|
| **Prompt Logprobs with Prefix Caching** | 🟡 Planned ([RFC #13414](https://github.com/vllm-project/vllm/issues/13414))|
-| **Structured Output Alternative Backends** | 🟡 Planned |
+| **Structured Output Alternative Backends** | 🟢 Functional |
| **Embedding Models** | 🚧 WIP ([PR #16188](https://github.com/vllm-project/vllm/pull/16188)) |
| **Mamba Models** | 🟡 Planned |
| **Encoder-Decoder Models** | 🟠 Delayed |
@@ -132,13 +132,6 @@ in progress.
- **Multimodal Models**: V1 is almost fully compatible with V0 except that interleaved modality input is not supported yet.
See [here](https://github.com/orgs/vllm-project/projects/8) for the status of upcoming features and optimizations.

-#### Features to Be Supported
-
-- **Structured Output Alternative Backends**: Structured output alternative backends (outlines, guidance) support is planned. V1 currently
- supports only the `xgrammar:no_fallback` mode, meaning that it will error out if the output schema is unsupported by xgrammar.
- Details about the structured outputs can be found
- [here](https://docs.vllm.ai/en/latest/features/structured_outputs.html).
-
#### Models to Be Supported

vLLM V1 currently excludes model architectures with the `SupportsV0Only` protocol,