From c579b750a083931ad03ecac898aca5ad67c6c59c Mon Sep 17 00:00:00 2001
From: Zhuohan Li
Date: Mon, 13 May 2024 18:48:00 -0700
Subject: [PATCH] [Doc] Add meetups to the doc (#4798)

---
 docs/source/community/meetups.rst | 12 ++++++++++++
 docs/source/index.rst             |  7 +++++++
 2 files changed, 19 insertions(+)
 create mode 100644 docs/source/community/meetups.rst

diff --git a/docs/source/community/meetups.rst b/docs/source/community/meetups.rst
new file mode 100644
index 000000000000..fa1a26521814
--- /dev/null
+++ b/docs/source/community/meetups.rst
@@ -0,0 +1,12 @@
+.. _meetups:
+
+vLLM Meetups
+============
+
+We host regular meetups in the San Francisco Bay Area every two months, where we share project updates from the vLLM team and invite guest speakers from industry to share their experience and insights. Please find the materials from our previous meetups below:
+
+- `The third vLLM meetup `_, with Roblox, April 2nd, 2024, `slides `_.
+- `The second vLLM meetup `_, with IBM Research, January 31st, 2024, `slides `_, `video (vLLM Update) `_, `video (IBM Research & torch.compile) `_.
+- `The first vLLM meetup `_, with a16z, October 5th, 2023, `slides `_.
+
+We are always looking for speakers and sponsors in the San Francisco Bay Area and potentially other locations. If you are interested in speaking or sponsoring, please contact us at `vllm-questions@lists.berkeley.edu <mailto:vllm-questions@lists.berkeley.edu>`_.
diff --git a/docs/source/index.rst b/docs/source/index.rst
index e1e81778dbdb..bab00e28e401 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -50,6 +50,7 @@ For more information, check out the following:
 * `vLLM announcing blog post `_ (intro to PagedAttention)
 * `vLLM paper `_ (SOSP 2023)
 * `How continuous batching enables 23x throughput in LLM inference while reducing p50 latency `_ by Cade Daniel et al.
+* :ref:`vLLM Meetups <meetups>`.
 
 
 
@@ -112,6 +113,12 @@ Documentation
    dev/kernel/paged_attention
    dev/dockerfile/dockerfile
 
+.. toctree::
+   :maxdepth: 2
+   :caption: Community
+
+   community/meetups
+
 Indices and tables
 ==================
 
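A quick sketch of how the patch wires the new page in, for reviewers less familiar with Sphinx (minimal reST; the paths mirror the patch, the sample sentence is illustrative): the `.. _meetups:` label defined at the top of meetups.rst is the target that the `:ref:` role added to index.rst resolves to, and the `toctree` entry is what registers the page with the build and the sidebar.

.. In docs/source/community/meetups.rst -- define a label, then the page title:
.. _meetups:

vLLM Meetups
============

.. In docs/source/index.rst -- cross-reference by label, so the link keeps working if the file moves:
See :ref:`vLLM Meetups <meetups>` for slides and videos from past events.

.. Also in docs/source/index.rst -- list the page in a toctree so Sphinx builds it and shows it in the sidebar:
.. toctree::
   :maxdepth: 2
   :caption: Community

   community/meetups

Sphinx warns at build time when a document is not included in any toctree, which is why the new `toctree` block accompanies the new file rather than relying on the `:ref:` link alone.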