xinyun/vllm
Mirror of https://git.datalinker.icu/vllm-project/vllm.git
vllm/vllm

Latest commit: ebede26ebf by Jie Li, 2023-12-07 08:32:08 -08:00
    Make InternLM follow rope_scaling in config.json (#1956)
    Co-authored-by: lijie8 <lijie8@sensetime.com>
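The headline commit concerns the rope_scaling field that HuggingFace-style model configs use to stretch RoPE position embeddings beyond the trained context length; per the commit title, vLLM's InternLM implementation now honors this field. As a point of reference only (this is not vLLM's code; the field names follow the common HuggingFace convention with "type" and "factor" keys, and the values are illustrative), a minimal Python sketch of reading such a field from a model's config.json:

    # Illustrative sketch: assumes a HuggingFace-style config.json in the
    # working directory; field names follow the common convention.
    import json

    with open("config.json") as f:
        config = json.load(f)

    rope_scaling = config.get("rope_scaling")  # None if the model defines no scaling
    if rope_scaling is not None:
        scaling_type = rope_scaling["type"]      # e.g. "linear" or "dynamic"
        scaling_factor = rope_scaling["factor"]  # context-length multiplier, e.g. 2.0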
Name                 Last commit                                                       Last updated
core/                [FIX] Fix formatting error                                        2023-11-29 00:40:19 +00:00
engine/              Fix num_gpus when TP > 1 (#1852)                                  2023-12-03 12:24:30 -08:00
entrypoints/         add custom server params (#1868)                                  2023-12-03 12:59:18 -08:00
model_executor/      Make InternLM follow rope_scaling in config.json (#1956)         2023-12-07 08:32:08 -08:00
transformers_utils/  Fix Baichuan tokenizer error (#1874)                              2023-11-30 18:35:50 -08:00
worker/              Fix broken sampler tests (#1896)                                  2023-12-02 16:06:17 -08:00
__init__.py          Bump up to v0.2.3 (#1903)                                         2023-12-03 12:27:47 -08:00
block.py             [Quality] Add code formatter and linter (#326)                    2023-07-03 11:31:55 -07:00
config.py            Refactor Worker & InputMetadata (#1843)                           2023-11-29 22:16:37 -08:00
logger.py            [Fix] Fix duplicated logging messages (#1524)                     2023-10-31 09:04:47 -07:00
outputs.py           docs: add description (#1553)                                     2023-11-03 09:14:52 -07:00
py.typed             Add py.typed so consumers of vLLM can get type checking (#1509)  2023-10-30 14:50:47 -07:00
sampling_params.py   add custom server params (#1868)                                  2023-12-03 12:59:18 -08:00
sequence.py          [FIX] Fix class naming (#1803)                                    2023-11-28 14:08:01 -08:00
utils.py             [Build] Avoid building too many extensions (#1624)                2023-11-23 16:31:19 -08:00
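Two entries in the table above are worth a usage note: py.typed is the PEP 561 marker that lets type checkers consume vLLM's inline annotations, and sampling_params.py defines the SamplingParams object passed to generation. A minimal sketch against vLLM's documented top-level API as of v0.2.x (the model id is just an example):

    # Because the package ships py.typed, a type checker such as mypy
    # will validate these calls against vLLM's inline annotations.
    from vllm import LLM, SamplingParams

    params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)
    llm = LLM(model="facebook/opt-125m")  # example model; any supported HF model id
    for output in llm.generate(["Hello, my name is"], params):
        print(output.outputs[0].text)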