Commit c92ac07 (parent 4b97567)

README: Update the recommendation for local models

1 file changed: README.md (+4 -4)
````diff
@@ -37,9 +37,9 @@ echo "Translate into German: thank you" | ./ask-llm.py
 
 Supported local LLM servers include [llama.cpp](https://github.com/ggerganov/llama.cpp), [Jan](https://jan.ai), [Ollama](https://ollama.com), and [LocalAI](https://localai.io).
 
-To utilize [llama.cpp](https://github.com/ggerganov/llama.cpp) locally with its inference engine, ensure to load a quantized model such as [Phi-3 Mini](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf), [LLama-3 8B](https://huggingface.co/QuantFactory/Meta-Llama-3-8B-Instruct-GGUF), or [OpenHermes 2.5](https://huggingface.co/TheBloke/OpenHermes-2.5-Mistral-7B-GGUF). Adjust the environment variable `LLM_API_BASE_URL` accordingly:
+To utilize [llama.cpp](https://github.com/ggerganov/llama.cpp) locally with its inference engine, ensure to load a quantized model such as [Phi-3.5 Mini](https://huggingface.co/bartowski/Phi-3.5-mini-instruct-GGUF) or [Llama-3.1 8B](https://huggingface.co/lmstudio-community/Meta-Llama-3.1-8B-Instruct-GGUF). Adjust the environment variable `LLM_API_BASE_URL` accordingly:
 
 ```bash
-/path/to/llama.cpp/server -m Phi-3-mini-4k-instruct-q4.gguf
+/path/to/llama-server -m Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf
 export LLM_API_BASE_URL=http://127.0.0.1:8080/v1
 ```
````
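As a quick check that the updated `llama-server` invocation works, its OpenAI-compatible endpoint can be queried directly. The following is a minimal sketch, not part of the commit; it assumes the default listen address `127.0.0.1:8080` shown in the diff, and that the single-model server effectively ignores the `model` field in the request:

```bash
# Query the OpenAI-compatible chat endpoint exposed by llama-server
# (started as in the diff above; "model" is an assumption and is
# effectively ignored by a single-model llama-server instance).
curl -s http://127.0.0.1:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
        "model": "Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf",
        "messages": [
          {"role": "user", "content": "Translate into German: thank you"}
        ]
      }'
```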

````diff
@@ -51,9 +51,9 @@ export LLM_CHAT_MODEL='llama3-8b-instruct'
 
 To use [Ollama](https://ollama.com) locally, load a model and configure the environment variable `LLM_API_BASE_URL`:
 
 ```bash
-ollama pull phi3
+ollama pull phi3.5
 export LLM_API_BASE_URL=http://127.0.0.1:11434/v1
-export LLM_CHAT_MODEL='phi3'
+export LLM_CHAT_MODEL='phi3.5'
 ```
 
 For [LocalAI](https://localai.io), initiate its container and adjust the environment variable `LLM_API_BASE_URL`:
````
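Likewise, after pulling `phi3.5` and exporting both variables, the Ollama setup can be exercised end to end with the `./ask-llm.py` invocation from the first hunk's context line. A short sketch, assuming the script sits in the working directory:

```bash
# Confirm the model is present locally, then run a one-shot prompt
# through the repository's script (as in the first hunk's header).
ollama list | grep phi3.5
echo "Translate into German: thank you" | ./ask-llm.py
```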
