
instructions on how to load in a localhost:11343 Ollama model #43

Mikewhodat opened this issue Mar 1, 2025 · 1 comment

@Mikewhodat

Context: I've built the image and pushed it to my repo so I could run the container from Dockge. I ran the container bound to 0.0.0.0 on the host and it worked; the container is running and is accessible from my host (really from my whole network). Ollama is configured in a similar manner.

However, when I go to the "prompt Setting" selection window, the model is not listed. I did copy the .env file and left it at the defaults, since a localhost connection should be possible.

I then ran the compose file with it using my host's network, so a local connection to Ollama should be possible, or am I incorrect about this?

The YAML file configuration:
version: "3.8"
services:
  ollama-deep-researcher:
    image: mikewho90/ollama-deep-researcher:v1
    network_mode: host
    environment:
      SEARCH_API: tavily
      TAVILY_API_KEY: tvly- "obfuscated"
      OLLAMA_BASE_URL: http://127.0.0.1:11434/
      OLLAMA_MODEL: llama3.2
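(A quick sanity check, just a sketch and assuming curl is available inside the image: hitting Ollama's /api/tags endpoint from inside the running container should list the local models.)

# run from the directory containing the compose file; uses the service name above
# with network_mode: host, 127.0.0.1:11434 refers to the host's Ollama instance
docker compose exec ollama-deep-researcher curl http://127.0.0.1:11434/api/tags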

Not relevant to the issue, but worth looking into?
Another option would be to rebuild the Dockerfile: edit it to hard-code my IP into the .env, or add a RUN command to install Ollama and JupyterLab into the container (a preference of mine) and run ollama serve from the terminal. I am aware that I could "docker exec -it" into the container, but I like the JupyterLab UI. Another option is to use the sh terminal from Dockge, but that UI leaves a bit to be desired.

@gschmutz
Contributor

gschmutz commented Mar 5, 2025

@Mikewhodat if you run it as a Docker container, then 127.0.0.1 is resolved inside the Docker container and refers to the container itself. I'm not sure whether using network_mode: host will help you; I never use this option.
The way I run it, with Ollama installed directly on my Mac, is by using either the IP address of my machine or host.docker.internal:

services:
  ollama-deep-researcher:
    image: ghcr.io/gschmutz/ollama-deep-researcher:latest
    container_name: ollama-deep-researcher
    hostname: ollama-deep-researcher
    labels:
      com.platys.name: ollama-deep-researcher
      com.platys.description: Local web research and report writing assistant
      com.platys.webui.title: Ollama Deep Researcher UI
      com.platys.webui.url: https://smith.langchain.com/studio/thread?baseUrl=http://localhost:2024
    ports:
      - 2024:2024
    extra_hosts:
      - host.docker.internal:host-gateway
    environment:
      - SEARCH_API=tavily
      - TAVILY_API_KEY=tvl-xxxxxxx
      - OLLAMA_BASE_URL=http://host.docker.internal:11434
      - OLLAMA_MODEL=mistral
      - MAX_WEB_RESEARCH_LOOPS=3
      - FETCH_FULL_PAGE=false
      - LANGSMITH_API_KEY=lsv2_xxxxxx
    restart: unless-stopped
    healthcheck:
      test: wget --no-verbose --tries=3 --spider http://localhost:2024 || exit 1
      interval: 5s
      timeout: 10s
      retries: 5
      start_period: 10s
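For completeness, a minimal, untested sketch of the network_mode: host variant from the original question (Linux hosts only; ports: mappings are ignored in this mode, and Ollama must be listening on the host):

services:
  ollama-deep-researcher:
    image: ghcr.io/gschmutz/ollama-deep-researcher:latest
    network_mode: host
    environment:
      - SEARCH_API=tavily
      - TAVILY_API_KEY=tvl-xxxxxxx
      # the container shares the host's network stack, so 127.0.0.1 reaches the host's Ollama
      - OLLAMA_BASE_URL=http://127.0.0.1:11434
      - OLLAMA_MODEL=mistral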
