# CacheFlow

## Installation

```bash
pip install psutil numpy ray torch
pip install git+https://github.com/huggingface/transformers  # Required for LLaMA.
pip install sentencepiece  # Required for LlamaTokenizer.
pip install flash-attn  # This may take up to 20 mins.
pip install -e .
```
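
To sanity-check the installation, you can try importing the core dependencies. This one-liner is only a quick verification step, not part of the official setup:

```bash
python -c "import psutil, numpy, ray, torch, transformers, sentencepiece, flash_attn; print('All dependencies imported successfully.')"
```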

## Test the simple server

```bash
ray start --head
python simple_server.py
```

The detailed arguments for `simple_server.py` can be found by running:

```bash
python simple_server.py --help
```
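
If the server fails to start, make sure the Ray cluster from the previous step is actually running. The `ray` CLI can inspect and tear down the local cluster:

```bash
ray status  # Show the nodes and resources of the running Ray cluster.
ray stop    # Stop the local Ray cluster when you are done.
```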

## FastAPI server

Install the following additional dependencies:

```bash
pip install fastapi uvicorn
```

To start the server:

```bash
ray start --head
python -m cacheflow.http_frontend.fastapi_frontend
```
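
You can also send a request to the server by hand. The example below is only a sketch: the port, route, and JSON fields are assumptions, so check `cacheflow/http_frontend/fastapi_frontend.py` for the actual endpoint and request schema:

```bash
# Hypothetical request; verify the port (8000), route (/generate),
# and payload fields against fastapi_frontend.py before relying on them.
curl http://localhost:8000/generate \
    -H "Content-Type: application/json" \
    -d '{"prompt": "Hello, my name is"}'
```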

To test the server:

```bash
python -m cacheflow.http_frontend.test_cli_client
```

## Gradio web server

Install the following additional dependencies:

```bash
pip install gradio
```

Start the server:

```bash
python -m cacheflow.http_frontend.fastapi_frontend
# In another terminal
python -m cacheflow.http_frontend.gradio_webserver
```
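
Once both processes are up, Gradio prints the local URL it is serving (by default, `http://localhost:7860`); open it in a browser to interact with the model.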