
Commit b70c009

README: LM Studio
1 parent 55d860d commit b70c009

1 file changed: README.md (+7 −2 lines)
@@ -11,7 +11,7 @@ It is available in several flavors:
* Clojure version. Compatible with [Babashka](https://babashka.org/) (>= 1.3).
* Go version. Compatible with [Go](https://golang.org), v1.19 or higher.

-Ask LLM is compatible with either a cloud-based (managed) LLM service (e.g. [OpenAI GPT model](https://platform.openai.com/docs), [Groq](https://groq.com), [OpenRouter](https://openrouter.ai), etc) or with a locally hosted LLM server (e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [LocalAI](https://localai.io), [Ollama](https://ollama.com), etc). Please continue reading for detailed instructions.
+Ask LLM is compatible with either a cloud-based (managed) LLM service (e.g. [OpenAI GPT model](https://platform.openai.com/docs), [Groq](https://groq.com), [OpenRouter](https://openrouter.ai), etc) or with a locally hosted LLM server (e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [LM Studio](https://lmstudio.ai), [Ollama](https://ollama.com), etc). Please continue reading for detailed instructions.

Interact with the LLM with:

```bash
@@ -33,7 +33,7 @@ echo "Translate into German: thank you" | ./ask-llm.py

## Using Local LLM Servers

-Supported local LLM servers include [llama.cpp](https://github.com/ggerganov/llama.cpp), [Jan](https://jan.ai), [Ollama](https://ollama.com), and [LocalAI](https://localai.io).
+Supported local LLM servers include [llama.cpp](https://github.com/ggerganov/llama.cpp), [Jan](https://jan.ai), [Ollama](https://ollama.com), [LocalAI](https://localai.io), and [LM Studio](https://lmstudio.ai).

To utilize [llama.cpp](https://github.com/ggerganov/llama.cpp) locally with its inference engine, ensure to load a quantized model such as [Phi-3.5 Mini](https://huggingface.co/bartowski/Phi-3.5-mini-instruct-GGUF), or [Llama-3.1 8B](https://huggingface.co/lmstudio-community/Meta-Llama-3.1-8B-Instruct-GGUF). Adjust the environment variable `LLM_API_BASE_URL` accordingly:

```bash
@@ -60,6 +60,11 @@ docker run -ti -p 8080:8080 localai/localai tinyllama-chat
export LLM_API_BASE_URL=http://localhost:3928/v1
```

+For [LM Studio](https://lmstudio.ai), search for and download a model. Next, go to the Developer tab, select the model to load, and click the Start Server button. Then, set the `LLM_API_BASE_URL` environment variable, noting that the server by default runs on port `1234`:
+```bash
+export LLM_API_BASE_URL=http://127.0.0.1:1234/v1
+```
+
## Using Managed LLM Services

[![Test on AI21](https://github.com/ariya/ask-llm/actions/workflows/test-ai21.yml/badge.svg)](https://github.com/ariya/ask-llm/actions/workflows/test-ai21.yml)
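
For a quick end-to-end check of the LM Studio path added above, the new base URL can be combined with the invocation the README already documents. This is a minimal sketch: it assumes LM Studio is serving a loaded model on its default port `1234`, that the Python flavor (`./ask-llm.py`) is in use, and, for the optional check, that the server exposes the usual OpenAI-style `/models` listing.

```bash
# Point Ask LLM at LM Studio's local server (default port 1234, per the new README text).
export LLM_API_BASE_URL=http://127.0.0.1:1234/v1

# Optional sanity check -- assumes the server exposes an OpenAI-style /models listing.
curl "$LLM_API_BASE_URL/models"

# Reuse the prompt the README already demonstrates, now answered by the locally loaded model.
echo "Translate into German: thank you" | ./ask-llm.py
```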
