ollama model in docker compose #300

Open · yongjer opened this issue Apr 1, 2024 · 5 comments

yongjer commented Apr 1, 2024

Describe the bug
Cannot find the pulled Ollama model in the UI.

To Reproduce
Steps to reproduce the behavior:

  1. Run docker compose up
  2. Attach a shell to the ollama container in the compose stack
  3. Run ollama pull gemma:2b

Expected behavior
The pulled Ollama model should be selectable in the UI.

Desktop (please complete the following information):

  • OS: Ubuntu 22.04
  • Version: pulled from source on April 2nd

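For anyone reproducing this, a quick sanity check (a sketch, assuming the compose service is named ollama, Ollama listens on its default port 11434, and that port is published to the host) is to confirm the model was actually pulled and is visible through Ollama's API:

# inside the ollama container: list pulled models
docker compose exec ollama ollama list

# from the host, query Ollama's native API; the pulled model
# should appear in the returned JSON under "models"
curl http://localhost:11434/api/tags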


cpAtor commented Apr 5, 2024

I am seeing the same issue: nothing shows up under Ollama when running docker compose up in the devika root directory.

Initially there were no models loaded in the Ollama server when I first ran docker compose up, so I manually pulled the phi model by running ollama pull phi in the ollama container, followed by docker compose restart in the devika root directory.

After this I can confirm the presence of the phi model in the Ollama server, as shown below:
[screenshot: ollama list output showing the phi model]

Still not seeing the option in the Devika UI, as shown below:
[screenshot: Devika UI model selector with no Ollama models listed]

Has anyone been able to run Devika using Docker?
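
One thing worth checking here (a sketch; devika-backend-engine is the container name from the logs further down, and ollama is an assumed service name, so adjust both to your setup): whether the backend container can actually reach the Ollama service over the compose network, since pulling a model and restarting won't help if the configured endpoint is unreachable.

# from the backend container, hit Ollama's model-listing endpoint
# (assumes curl is available in the backend image); a connection
# error here points at networking/config, not at the pull itself
docker compose exec devika-backend-engine curl -s http://ollama:11434/api/tags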


Volko61 commented Apr 7, 2024

Same issue on Linux with deepseek-coder installed in Ollama via ollama run deepseekcoder.

I even tried restarting the container and running Ollama on the host machine; the models still don't show up.
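
If Ollama runs on the host while the Devika backend runs in Docker, note that localhost inside a container refers to the container itself, not the host. A minimal way to check that setup (a sketch; on Linux, host.docker.internal only resolves inside the container if the service maps it, e.g. with extra_hosts: ["host.docker.internal:host-gateway"] in the compose file):

# from inside the backend container, use the host-gateway alias
# instead of localhost to reach an Ollama instance on the host
curl http://host.docker.internal:11434/api/tags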


RexWzh commented Apr 9, 2024

Same for me, using either Docker or plain Python.

Update: in my case, no models show up in the frontend.

config.toml

❯ cat config.toml
[STORAGE]
SQLITE_DB = "db/devika.db"
SCREENSHOTS_DIR = "screenshots"
PDFS_DIR = "pdfs"
PROJECTS_DIR = "projects"
LOGS_DIR = "logs"
REPOS_DIR = "repos"

[API_KEYS]
BING = "39b..."
OPENAI = "sk-..."


[API_ENDPOINTS]
BING = "https://api.bing.microsoft.com/v7.0/search"
OLLAMA = "http://myollama:11434/v1"
OPENAI = "https://myendpoint/v1"

[LOGGING]
LOG_REST_API = "true"
LOG_PROMPTS = "false"
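
One possible culprit in the config above: /v1 is Ollama's OpenAI-compatible path, while Ollama's own model-listing endpoint lives under the root (/api/tags). If Devika discovers models through the native API, pointing OLLAMA at http://myollama:11434/v1 would explain the "Failed to list Ollama models" error in the Docker logs below. A quick check (myollama is the hostname from this config):

# should return the pulled models as JSON if the endpoint is right
curl http://myollama:11434/api/tags

If that works, it may be worth trying OLLAMA = "http://myollama:11434" (no /v1 suffix) in config.toml.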

logs

Python logs:

❯ python devika.py
24.04.09 23:39:18: root: INFO   : Initializing Devika...
24.04.09 23:39:18: root: INFO   : Initializing Prerequisites Jobs...
24.04.09 23:39:20: root: INFO   : Loading sentence-transformer BERT models...
No sentence-transformers model found with name sentence-transformers/all-MiniLM-L6-v2. Creating a new one with MEAN pooling.
24.04.09 23:39:28: root: INFO   : BERT model loaded successfully.
24.04.09 23:39:29: root: INFO   : Ollama available
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
        - Avoid using `tokenizers` before the fork if possible
        - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
        - Avoid using `tokenizers` before the fork if possible
        - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
24.04.09 23:39:30: root: INFO   : Devika is up and running!

Docker logs:

❯ dlog
devika-backend-engine  | 24.04.09 15:40:57: root: ERROR  : Failed to list Ollama models:
devika-backend-engine  | 24.04.09 15:41:04: root: INFO   : Booting up... This may take a few seconds
devika-backend-engine  | 24.04.09 15:41:04: root: INFO   : Initializing Devika...
devika-backend-engine  | 24.04.09 15:41:04: root: INFO   : Initializing Prerequisites Jobs...
devika-backend-engine  | 24.04.09 15:41:04: root: INFO   : Loading sentence-transformer BERT models...
devika-backend-engine  | 24.04.09 15:41:44: root: INFO   : BERT model loaded successfully.
devika-frontend-app    | $ vite dev --host
devika-frontend-app    | Forced re-optimization of dependencies
devika-frontend-app    |
devika-frontend-app    |   VITE v5.2.8  ready in 612 ms
devika-frontend-app    |
devika-backend-engine  |  * Serving Flask app 'devika'
devika-frontend-app    |   ➜  Local:   http://localhost:3000/
devika-backend-engine  |  * Debug mode: off
devika-frontend-app    |   ➜  Network: http://10.10.69.3:3000/
devika-frontend-app    | 3:38:40 PM [vite] ✨ new dependencies optimized: socket.io-client, xterm, xterm-addon-fit, tiktoken/lite
devika-frontend-app    | 3:38:40 PM [vite] ✨ optimized dependencies changed. reloading

[screenshot: Devika frontend showing no models available]


ARajgor (Collaborator) commented Apr 16, 2024

The Docker setup is not fully functional yet. You can try running without Docker; if Ollama is running on your system, Devika should pick it up.
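
For the non-Docker route, a quick way to confirm the local Ollama instance is reachable before starting Devika (assuming config.toml points OLLAMA at http://localhost:11434):

# both should succeed and show your pulled models
ollama list
curl http://localhost:11434/api/tags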


RexWzh commented Apr 16, 2024

I think the primary challenge is with the backend services.
They work when connected locally via localhost, but can run into cross-origin problems when accessed through a domain name.
Generally, the backend should be reachable through relative URIs, for example by proxying backend connections through an endpoint like /api.
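
To illustrate the cross-origin symptom (a sketch; the backend port 1337, the /api/data path, and devika.example.com are all assumptions, not values from this thread):

# simulate a browser request coming from another origin; if the
# response carries no Access-Control-Allow-Origin header, the
# browser will block the call even though curl itself succeeds
curl -s -i -H "Origin: https://devika.example.com" http://localhost:1337/api/data | grep -i access-control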
