[docs] governance documents (#24801)
Signed-off-by: simon-mo <simon.mo@hey.com>
Signed-off-by: Michael Goin <mgoin64@gmail.com>
Signed-off-by: Simon Mo <simon.mo@hey.com>
Co-authored-by: Mark McLoughlin <markmc@redhat.com>
Co-authored-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: Michael Goin <mgoin64@gmail.com>
Co-authored-by: Wentao Ye <44945378+yewentao256@users.noreply.github.com>
Co-authored-by: Luka Govedič <ProExpertProg@users.noreply.github.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
commit 77072e93b3 (parent 2e660c2434)
@@ -59,6 +59,7 @@ nav:
   - CLI Reference: cli
   - Community:
     - community/*
+    - Governance: governance
     - Blog: https://blog.vllm.ai
     - Forum: https://discuss.vllm.ai
     - Slack: https://slack.vllm.ai
docs/governance/collaboration.md (new file, 43 lines)
@@ -0,0 +1,43 @@
# Collaboration Policy

This page outlines how vLLM collaborates with model providers, hardware vendors, and other stakeholders.

## Adding New Major Features

Anyone can contribute to vLLM. For major features, submit an RFC (Request for Comments) first. To submit an RFC, create an [issue](https://github.com/vllm-project/vllm/issues/new/choose) and select the `RFC` template.
An RFC is similar to a design doc: it discusses the motivation, the problem being solved, the alternatives considered, and the proposed change.

Once you submit the RFC, please post it in the #contributors channel of the vLLM Slack and loop in area owners and committers for feedback.
For high-interest features, the committers nominate a person to help with the RFC process and PR review. This ensures someone guides you through the process; the nomination is reflected in the "assignee" field of the RFC issue.
If the assignee and lead maintainers find the feature contentious, the maintainer team aims to make a decision quickly after learning the details from everyone involved. This involves assigning a committer as the DRI (Directly Responsible Individual) to make the decision and shepherd the code contribution process.

For features that you intend to maintain, feel free to add yourself to [`mergify.yml`](https://github.com/vllm-project/vllm/blob/main/.github/mergify.yml) to receive notifications and auto-assignment when PRs touch the feature you maintain; a sketch of such a rule is shown below. Over time, ownership is evaluated and updated through the committer nomination and voting process.
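
For illustration, here is a minimal sketch of the kind of Mergify rule this refers to, assuming Mergify's `files` condition and `assign` action; the rule name, file path, and username are hypothetical placeholders, so match the shape of the existing entries in `mergify.yml`:

```yaml
pull_request_rules:
  - name: assign reviewer for my-feature changes  # hypothetical rule name
    conditions:
      - files~=^vllm/my_feature/  # hypothetical: path to the feature you maintain
    actions:
      assign:
        users:
          - "your-github-handle"  # hypothetical: your GitHub username
```
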
## Adding New Models

If you use vLLM, we recommend making your model work with vLLM by following the [model registration](../contributing/model/registration.md) process before you release it publicly.
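
As a rough illustration, out-of-tree registration boils down to mapping the architecture name in your model's `config.json` to your implementation via vLLM's `ModelRegistry`. This is a sketch with hypothetical package and class names; follow the registration guide for the authoritative steps:

```python
# Sketch of an out-of-tree model registration plugin (hypothetical names).
from vllm import ModelRegistry


def register():
    # Map the architecture string from the model's config.json to the
    # nn.Module implementation; the lazy import keeps plugin loading cheap.
    from my_model_pkg.modeling import MyModelForCausalLM
    ModelRegistry.register_model("MyModelForCausalLM", MyModelForCausalLM)
```
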
The vLLM team helps with new model architectures not yet supported by vLLM, especially models pushing architectural frontiers.
Here's how the vLLM team works with model providers. The vLLM team includes all [committers](./committers.md) of the project. Model providers can exclude certain members, but shouldn't, as this may harm release timelines due to missing expertise. Contact the [project leads](./process.md) if you want to collaborate.

Once we establish the connection between the vLLM team and the model provider:

- The vLLM team learns the model architecture and relevant changes, then plans which area owners to involve and what features to include.
- The vLLM team creates a private communication channel (currently a Slack channel in the vLLM workspace) and a private fork within the vllm-project organization. The model provider team can invite others to the channel and repo.
- Third parties like compute providers, hosted inference providers, hardware vendors, and other organizations often work with both the model provider and vLLM on model releases. We establish direct communication (with permission) or three-way communication as needed.

The vLLM team works with model providers on features, integrations, and release timelines. We work to meet release timelines, but engineering challenges like feature development, model accuracy alignment, and optimizations can cause delays.

The vLLM maintainers will not publicly share details about model architecture, release timelines, or upcoming releases. We maintain model weights on secure servers with security measures (we can accommodate security reviews and testing, though without formal certification). We delete pre-release weights and artifacts upon request.

The vLLM team collaborates on marketing and promotional efforts for model releases. Model providers can use vLLM's trademark and logo in publications and materials.

## Adding New Hardware

vLLM is designed as a platform for frontier model architectures and high-performance accelerators.
For new hardware, add support through the [hardware plugin](../design/plugin_system.md) system, as sketched below.
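
As a rough sketch of what a platform plugin looks like, assuming the `vllm.platform_plugins` entry-point mechanism described in the plugin system docs; the package, module, and class names here are hypothetical:

```python
# setup.py of a hypothetical out-of-tree hardware plugin package.
from setuptools import setup

setup(
    name="vllm-my-accelerator",
    packages=["my_accelerator"],
    entry_points={
        # vLLM discovers platform plugins through this entry-point group.
        "vllm.platform_plugins": ["my_accelerator = my_accelerator:register"]
    },
)
```

```python
# my_accelerator/__init__.py (hypothetical)
def register():
    # Return the fully qualified name of the Platform subclass,
    # or None if this hardware is not present on the machine.
    return "my_accelerator.platform.MyAcceleratorPlatform"
```
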
As hardware gains popularity, we help endorse it in our documentation and marketing materials.
The vLLM GitHub organization can host hardware plugin repositories, especially for collaborative efforts among companies.

We rarely add new hardware to vLLM directly. Instead, we keep existing hardware platforms modular so the vLLM core stays hardware-agnostic.

docs/governance/committers.md (new file, 183 lines)
@@ -0,0 +1,183 @@
# Committers

This document lists the current committers of the vLLM project and the core areas they maintain.
Committers have write access to the vLLM repository and are responsible for reviewing and merging PRs.
You can also refer to the [CODEOWNERS](https://github.com/vllm-project/vllm/blob/main/.github/CODEOWNERS) file for concrete file-level ownership and reviewers. Both this document and the CODEOWNERS file are living documents; they complement each other.

## Active Committers

We try to summarize each committer's role in vLLM in a few words. In general, vLLM committers cover a wide range of areas and help each other in the maintenance process.
Please refer to the [Area Owners](#area-owners) section below for exact component ownership details.
Sorted alphabetically by GitHub handle:

- [@22quinn](https://github.com/22quinn): RL API
- [@aarnphm](https://github.com/aarnphm): Structured output
- [@alexm-redhat](https://github.com/alexm-redhat): Performance
- [@ApostaC](https://github.com/ApostaC): Connectors, offloading
- [@benchislett](https://github.com/benchislett): Engine core and spec decode
- [@bigPYJ1151](https://github.com/bigPYJ1151): Intel CPU/XPU integration
- [@chaunceyjiang](https://github.com/chaunceyjiang): Tool use and reasoning parser
- [@DarkLight1337](https://github.com/DarkLight1337): Multimodality, API server
- [@esmeetu](https://github.com/esmeetu): Developer marketing, community
- [@gshtras](https://github.com/gshtras): AMD integration
- [@heheda12345](https://github.com/heheda12345): Hybrid memory allocator
- [@hmellor](https://github.com/hmellor): Hugging Face integration, documentation
- [@houseroad](https://github.com/houseroad): Engine core and Llama models
- [@Isotr0py](https://github.com/Isotr0py): Multimodality, new model support
- [@jeejeelee](https://github.com/jeejeelee): LoRA, new model support
- [@jikunshang](https://github.com/jikunshang): Intel CPU/XPU integration
- [@khluu](https://github.com/khluu): CI infrastructure
- [@KuntaiDu](https://github.com/KuntaiDu): KV connector
- [@LucasWilkinson](https://github.com/LucasWilkinson): Kernels and performance
- [@luccafong](https://github.com/luccafong): Llama models, speculative decoding, distributed
- [@markmc](https://github.com/markmc): Observability
- [@mgoin](https://github.com/mgoin): Quantization and performance
- [@NickLucche](https://github.com/NickLucche): KV connector
- [@njhill](https://github.com/njhill): Distributed, API server, engine core
- [@noooop](https://github.com/noooop): Pooling models
- [@patrickvonplaten](https://github.com/patrickvonplaten): Mistral models, new model support
- [@pavanimajety](https://github.com/pavanimajety): NVIDIA GPU integration
- [@ProExpertProg](https://github.com/ProExpertProg): Compilation, startup UX
- [@robertgshaw2-redhat](https://github.com/robertgshaw2-redhat): Core, distributed, disagg
- [@ruisearch42](https://github.com/ruisearch42): Pipeline parallelism, Ray support
- [@russellb](https://github.com/russellb): Structured output, engine core, security
- [@sighingnow](https://github.com/sighingnow): Qwen models, new model support
- [@simon-mo](https://github.com/simon-mo): Project lead, API entrypoints, community
- [@tdoublep](https://github.com/tdoublep): State space models
- [@tjtanaa](https://github.com/tjtanaa): AMD GPU integration
- [@tlrmchlsmth](https://github.com/tlrmchlsmth): Kernels and performance, distributed, disagg
- [@WoosukKwon](https://github.com/WoosukKwon): Project lead, engine core
- [@yaochengji](https://github.com/yaochengji): TPU integration
- [@yeqcharlotte](https://github.com/yeqcharlotte): Benchmarks, Llama models
- [@yewentao256](https://github.com/yewentao256): Kernels and performance
- [@Yikun](https://github.com/Yikun): Pluggable hardware interface
- [@youkaichao](https://github.com/youkaichao): Project lead, distributed, compile, community
- [@ywang96](https://github.com/ywang96): Multimodality, benchmarks
- [@zhuohan123](https://github.com/zhuohan123): Project lead, RL integration, numerics
- [@zou3519](https://github.com/zou3519): Compilation

### Emeritus Committers

Committers who contributed significantly to vLLM in the past (thank you!) but are no longer active:

- [@andoorve](https://github.com/andoorve): Pipeline parallelism
- [@cadedaniel](https://github.com/cadedaniel): Speculative decoding
- [@comaniac](https://github.com/comaniac): KV cache management, pipeline parallelism
- [@LiuXiaoxuanPKU](https://github.com/LiuXiaoxuanPKU): Speculative decoding
- [@pcmoritz](https://github.com/pcmoritz): MoE
- [@rkooo567](https://github.com/rkooo567): Chunked prefill
- [@sroy745](https://github.com/sroy745): Speculative decoding
- [@Yard1](https://github.com/Yard1): Kernels and performance
- [@zhisbug](https://github.com/zhisbug): Arctic models, distributed

## Area Owners

This section breaks down the active committers by vLLM component and lists the area owners.
If your PR touches one of these areas, feel free to ping the area owners for review.

### Engine Core

- Scheduler: the core vLLM engine loop that schedules requests into the next batch
    - @WoosukKwon, @robertgshaw2-redhat, @njhill, @heheda12345
- KV Cache Manager: the memory management layer within the scheduler that maintains logical KV cache block data
    - @heheda12345, @WoosukKwon
- AsyncLLM: the ZMQ-based protocol hosting the engine core and making it accessible to entrypoints
    - @robertgshaw2-redhat, @njhill, @russellb
- ModelRunner, Executor, Worker: the engine abstractions wrapping the model implementation
    - @WoosukKwon, @tlrmchlsmth, @heheda12345, @LucasWilkinson, @ProExpertProg
- KV Connector: the connector interface and implementations for KV cache offload and transfer
    - @robertgshaw2-redhat, @njhill, @KuntaiDu, @NickLucche, @ApostaC
- Distributed, Parallelism, Process Management: the process launchers that manage each worker and assign it to the right DP/TP/PP/EP ranks
    - @youkaichao, @njhill, @WoosukKwon, @ruisearch42
- Collectives: the usage of NCCL and other communication libraries/kernels
    - @tlrmchlsmth, @youkaichao
- Multimodality engine and memory management: core scheduling and memory management for vision, audio, and video inputs
    - @ywang96, @DarkLight1337

### Model Implementations

- Model Interface: the `nn.Module` interface and implementations for various models
    - @zhuohan123, @mgoin, @simon-mo, @houseroad, @ywang96 (multimodality), @jeejeelee (LoRA)
- Logits Processors / Sampler: the provided sampler class and pluggable logits processors
    - @njhill, @houseroad, @22quinn
- Custom Layers: utility layers in vLLM such as rotary embeddings and RMS norms
    - @ProExpertProg
- Attention: the attention interface for paged attention
    - @WoosukKwon, @LucasWilkinson, @heheda12345
- FusedMoE: the FusedMoE kernel, modular kernel framework, and EPLB
    - @tlrmchlsmth
- Quantization: the various quantization configs, weight loading, and kernels
    - @mgoin, @Isotr0py, @yewentao256
- Custom quantized GEMM kernels (cutlass_scaled_mm, marlin, machete)
    - @tlrmchlsmth, @LucasWilkinson
- Multi-modal Input Processing: components that load and process image/video/audio data into feature tensors
    - @DarkLight1337, @ywang96, @Isotr0py
- torch.compile: the torch.compile integration in vLLM, custom passes and transformations
    - @ProExpertProg, @zou3519, @youkaichao
- State space models: the state space model implementations in vLLM
    - @tdoublep, @tlrmchlsmth
- Reasoning and tool calling parsers
    - @chaunceyjiang, @aarnphm

### Entrypoints

- LLM Class: the `LLM` class for offline inference
    - @DarkLight1337
- API Server: the OpenAI-compatible API server
    - @DarkLight1337, @njhill, @aarnphm, @simon-mo, @heheda12345 (Responses API)
- Batch Runner: the OpenAI-compatible batch runner
    - @simon-mo

### Features

- Spec Decode: covers the model definitions, attention, sampler, and scheduler work related to n-grams, EAGLE, and MTP
    - @WoosukKwon, @benchislett, @luccafong
- Structured Output: the structured output implementation
    - @russellb, @aarnphm
- RL: RL-related features such as collective RPC, sleep mode, etc.
    - @youkaichao, @zhuohan123, @22quinn
- LoRA: @jeejeelee
- Observability: metrics and logging
    - @markmc, @robertgshaw2-redhat, @simon-mo

### Code Base

- Config: configuration registration and parsing
    - @hmellor
- Documentation: @hmellor, @DarkLight1337, @simon-mo
- Benchmarks: @ywang96, @simon-mo
- CI, Build, Release Process: @khluu, @njhill, @simon-mo
- Security: @russellb

### External Kernels Integration

- FlashAttention: @LucasWilkinson
- FlashInfer: @LucasWilkinson, @mgoin, @WoosukKwon
- Blackwell Kernels: @mgoin, @yewentao256
- DeepEP/DeepGEMM/pplx: @mgoin, @yewentao256

### Integrations

- Hugging Face: @hmellor, @Isotr0py
- Ray: @ruisearch42
- NIXL: @robertgshaw2-redhat, @NickLucche

### Collaboration with Model Vendors

- gpt-oss: @heheda12345, @simon-mo, @zhuohan123
- Llama: @luccafong
- Qwen: @sighingnow
- Mistral: @patrickvonplaten

### Hardware

- Plugin Interface: @youkaichao, @Yikun
- NVIDIA GPU: @pavanimajety
- AMD GPU: @gshtras, @tjtanaa
- Intel CPU/GPU: @jikunshang, @bigPYJ1151
- Google TPU: @yaochengji

### Ecosystem Projects

- Ascend NPU: [@wangxiyuan](https://github.com/wangxiyuan) and [more maintainers](https://vllm-ascend.readthedocs.io/en/latest/community/contributors.html#maintainers)
- Intel Gaudi HPU: [@xuechendi](https://github.com/xuechendi) and [@kzawora-intel](https://github.com/kzawora-intel)

docs/governance/process.md (new file, 125 lines)
@@ -0,0 +1,125 @@
# Governance Process

vLLM's success comes from our strong open source community. We favor informal, meritocratic norms over formal policies. This document clarifies our governance philosophy and practices.

## Values

vLLM aims to be the fastest and easiest-to-use LLM inference and serving engine. We stay current with advances, enable innovation, and support diverse models, modalities, and hardware.

### Design Values

1. **Top performance**: System performance is our top priority. We monitor overheads, optimize kernels, and publish benchmarks. We never leave performance on the table.
2. **Ease of use**: vLLM must be simple to install, configure, and operate. We provide clear documentation, fast startup, clean logs, helpful error messages, and monitoring guides. Many users fork our code or study it deeply, so we keep it readable and modular.
3. **Wide coverage**: vLLM supports frontier models and high-performance accelerators. We make it easy to add new models and hardware. vLLM + PyTorch form a simple interface that avoids complexity.
4. **Production ready**: vLLM runs 24/7 in production. It must be easy to operate and monitor for health issues.
5. **Extensibility**: vLLM serves as fundamental LLM infrastructure. Our codebase cannot cover every use case, so we design for easy forking and customization.

### Collaboration Values

1. **Tightly Knit and Fast-Moving**: Our maintainer team is aligned on vision, philosophy, and roadmap. We work closely to unblock each other and move quickly.
2. **Individual Merit**: No one buys their way into governance. Committer status belongs to individuals, not companies. We reward contribution, maintenance, and project stewardship.

## Project Maintainers

Maintainers form a hierarchy based on sustained, high-quality contributions and alignment with our design philosophy.

### Core Maintainers

Core Maintainers function like a project planning and decision-making committee. In other conventions, this body might be called a Technical Steering Committee (TSC). In vLLM vocabulary, they are often known as "Project Leads". They meet weekly to coordinate roadmap priorities and allocate engineering resources. Current active leads: @WoosukKwon, @zhuohan123, @simon-mo, @youkaichao, @robertgshaw2-redhat, @tlrmchlsmth, @mgoin, @njhill, @ywang96, @houseroad, @yeqcharlotte, @ApostaC

The responsibilities of the core maintainers are:

* Authoring the quarterly roadmap and taking responsibility for each development effort.
* Making major changes to the technical direction or scope of vLLM and vLLM projects.
* Defining the project's release strategy.
* Working with model providers, hardware vendors, and key users of vLLM to ensure the project is on the right track.

### Lead Maintainers

While core maintainers assume the day-to-day responsibilities of the project, lead maintainers are responsible for its overall direction and strategy. A committee of @WoosukKwon, @zhuohan123, @simon-mo, and @youkaichao currently shares this role with divided responsibilities.

The responsibilities of the lead maintainers are:

* Making decisions where consensus among core maintainers cannot be reached.
* Adopting changes to the project's technical governance.
* Organizing the voting process for new committers.

### Committers and Area Owners

Committers have write access and merge rights. They typically have deep expertise in specific areas and help the community.

The responsibilities of the committers are:

* Reviewing PRs and providing feedback.
* Addressing issues and questions from the community.
* Owning specific areas of the codebase and development efforts: reviewing PRs, addressing issues, answering questions, and improving documentation.

Notably, almost all committers are area owners. They author subsystems, review PRs, refactor code, monitor tests, and ensure compatibility with other areas. All area owners are committers with deep expertise in their area, but not all committers own areas.

For a full list of committers and their respective areas, see the [committers](./committers.md) page.

#### Nomination Process

Any committer can nominate candidates via our private mailing list:

1. **Nominate**: Any committer may nominate a candidate by email to the private maintainers' list, citing evidence mapped to the pre-existing standards, with links to PRs, reviews, RFCs, issues, benchmarks, and adoption evidence.
2. **Vote**: The lead maintainers gather support and concerns from the committer group. Shared concerns can stop the process. The vote typically lasts 3 working days. When concerns are raised, the committer group discusses clear criteria under which the candidate can be nominated again. The lead maintainers make the final decision.
3. **Confirm**: The lead maintainers send an invitation, update CODEOWNERS, assign permissions, and add the new committer to communication channels (mailing list and Slack).

Committership is highly selective and merit-based. The selection criteria require:

* **Area expertise**: leading the design/implementation of core subsystems, material performance or reliability improvements adopted project-wide, or accepted RFCs that shape technical direction.
* **Sustained contributions**: high-quality merged contributions and reviews across releases, responsiveness to feedback, and stewardship of code health.
* **Community leadership**: mentoring contributors, triaging issues, improving docs, and elevating project standards.

To further illustrate, a committer typically satisfies at least two of the following accomplishment patterns:

* Author of an accepted RFC or design that materially shaped project direction
* Measurable, widely adopted performance or reliability improvement in core paths
* Long-term ownership of a subsystem with demonstrable quality and stability gains
* Significant cross-project compatibility or ecosystem enablement work (models, hardware, tooling)

While there isn't a quantitative bar, past committers have:

* Submitted approximately 30+ PRs of substantial quality and scope
* Provided high-quality reviews of approximately 10+ substantial external contributor PRs
* Addressed multiple issues and questions from the community in issues/forums/Slack
* Led concentrated efforts on RFCs and their implementation, or significant performance or reliability improvements adopted project-wide

### Working Groups

vLLM runs informal working groups such as CI, CI infrastructure, torch compile, and startup UX. These can be loosely tracked via `#sig-` (or `#feat-`) channels in the vLLM Slack. Some groups have regular sync meetings.

### Advisory Board

vLLM project leads consult an informal advisory board composed of model providers, hardware vendors, and ecosystem partners. This takes the form of a collaboration channel in Slack and frequent communications.

## Process

### Project Roadmap

Project Leads publish quarterly roadmaps as GitHub issues. These clarify current priorities. Unlisted topics aren't excluded but may get less review attention. See [https://roadmap.vllm.ai/](https://roadmap.vllm.ai/).

### Decision Making

We make technical decisions in Slack and GitHub using RFCs and design docs. Discussion may happen elsewhere, but we maintain public records of significant changes: problem statements, rationale, and alternatives considered.

### Merging Code

Contributors and maintainers often collaborate closely on code changes, especially within organizations or specific areas. Maintainers should give others appropriate review opportunities based on the significance of the change.

A PR requires review and approval from at least one committer. If the code is covered by CODEOWNERS, the relevant code owners should review it. When a change is trivial or a hotfix, the lead maintainers can merge it directly.

When CI fails for reasons unrelated to the PR, the lead maintainers can merge it using the "force merge" option, which overrides the CI checks.

### Slack

Contributors are encouraged to join the `#pr-reviews` and `#contributors` channels.

There are `#sig-` and `#feat-` channels for discussion and coordination around specific topics.

The project maintainer group also uses a private channel for high-bandwidth collaboration.

### Meetings

We hold weekly contributor syncs with standup-style updates on progress, blockers, and plans. See the notes at [standup.vllm.ai](https://standup.vllm.ai) for joining instructions.