diff --git a/docs/source/contributing/profiling/profiling_index.md b/docs/source/contributing/profiling/profiling_index.md
index 79aeb292a9b7..3d044f890382 100644
--- a/docs/source/contributing/profiling/profiling_index.md
+++ b/docs/source/contributing/profiling/profiling_index.md
@@ -1,15 +1,15 @@
 # Profiling vLLM
 
+:::{warning}
+Profiling is only intended for vLLM developers and maintainers to understand the proportion of time spent in different parts of the codebase. **vLLM end-users should never turn on profiling** as it will significantly slow down inference.
+:::
+
 We support tracing vLLM workers using the `torch.profiler` module. You can enable tracing by setting the `VLLM_TORCH_PROFILER_DIR` environment variable to the directory where you want to save the traces: `VLLM_TORCH_PROFILER_DIR=/mnt/traces/`
 
 The OpenAI server also needs to be started with the `VLLM_TORCH_PROFILER_DIR` environment variable set.
 
 When using `benchmarks/benchmark_serving.py`, you can enable profiling by passing the `--profile` flag.
 
-:::{warning}
-Only enable profiling in a development environment.
-:::
-
 Traces can be visualized using <https://ui.perfetto.dev/>.
 
 :::{tip}
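
For reference, the end-to-end workflow the patched docs describe looks roughly like the sketch below. The model name, trace directory, and the benchmark flags other than `--profile` are illustrative assumptions, not part of this patch:

```bash
# Start the OpenAI-compatible server with profiling enabled.
# Traces are written to the directory named by VLLM_TORCH_PROFILER_DIR
# (any writable path works; /mnt/traces/ is just an example).
VLLM_TORCH_PROFILER_DIR=/mnt/traces/ \
    vllm serve meta-llama/Llama-3.1-8B-Instruct

# In a second shell, drive a short benchmark run. --profile asks the
# benchmark script to start and stop the server-side torch profiler
# around the request batch.
python benchmarks/benchmark_serving.py \
    --backend vllm \
    --model meta-llama/Llama-3.1-8B-Instruct \
    --num-prompts 10 \
    --profile
```

Keeping the request count small (e.g. `--num-prompts 10`) keeps the resulting trace files at a manageable size for viewing in Perfetto.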