Benchmarks

This directory contains vLLM's benchmark scripts and utilities for performance testing and evaluation.

Contents

  • Serving benchmarks: Scripts for measuring online inference performance (latency and throughput against a running server)
  • Throughput benchmarks: Scripts for measuring offline batch inference performance
  • Specialized benchmarks: Tools for testing specific features such as structured output, prefix caching, long-document QA, request prioritization, and multi-modal inference
  • Dataset utilities: A framework for loading and sampling from various benchmark datasets (ShareGPT, HuggingFace datasets, synthetic data, etc.)

Usage

For detailed usage instructions, examples, and dataset information, see the Benchmark CLI documentation.
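As a quick orientation, the benchmarks are typically driven through the `vllm bench` CLI. The commands below are a sketch only — the model name, dataset path, and exact flag names are illustrative and may differ across vLLM versions, so confirm them with `vllm bench <command> --help` before use:

```shell
# Online serving benchmark: measures latency/throughput against a running
# vLLM server (start one first, e.g. `vllm serve <model>`).
# Model, dataset path, and request counts here are placeholders.
vllm bench serve \
  --model meta-llama/Llama-3.1-8B-Instruct \
  --dataset-name sharegpt \
  --dataset-path ./ShareGPT_V3_unfiltered_cleaned_split.json \
  --num-prompts 200

# Offline throughput benchmark: batch inference without a server.
vllm bench throughput \
  --model meta-llama/Llama-3.1-8B-Instruct \
  --input-len 128 \
  --output-len 128 \
  --num-prompts 500
```

Both commands print summary statistics (e.g. request throughput and token latencies) when they complete; the serving benchmark requires a reachable server, while the throughput benchmark loads the model in-process.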

For full CLI reference see: