diff --git a/docs/contributing/model/README.md b/docs/contributing/model/README.md
index 6c013738ac1ec..36068bc14876b 100644
--- a/docs/contributing/model/README.md
+++ b/docs/contributing/model/README.md
@@ -3,7 +3,7 @@
 !!! important
     Many decoder language models can now be automatically loaded using the [Transformers backend][transformers-backend] without having to implement them in vLLM. See if `vllm serve ` works first!
 
-vLLM models are specialized [PyTorch](https://pytorch.org/) models that take advantage of various [features](../../features/compatibility_matrix.md) to optimize their performance.
+vLLM models are specialized [PyTorch](https://pytorch.org/) models that take advantage of various [features](../../features/README.md#compatibility-matrix) to optimize their performance.
 
 The complexity of integrating a model into vLLM depends heavily on the model's architecture.
 The process is considerably straightforward if the model shares a similar architecture with an existing model in vLLM.
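
For context on the advice in the patched admonition, here is a minimal sketch of how one might check whether a model already loads via the Transformers backend before writing a vLLM implementation. The model name is a placeholder, and `model_impl="transformers"` is assumed to be the engine argument corresponding to the `--model-impl transformers` option of `vllm serve`; this is an illustration, not part of the patch.

```python
# Sketch: try loading a model through the Transformers backend first.
# "my-org/my-decoder-model" is a placeholder; model_impl is assumed to map
# to the --model-impl CLI option that selects the Transformers backend.
from vllm import LLM, SamplingParams

llm = LLM(model="my-org/my-decoder-model", model_impl="transformers")
params = SamplingParams(max_tokens=16)

# If generation works here, a custom vLLM implementation may be unnecessary.
outputs = llm.generate(["Hello, my name is"], params)
print(outputs[0].outputs[0].text)
```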