Replies: 1 comment
Hey 👋, thanks for trying it out! That demo is developed as a custom module for Harbor Boost, which is an optimising LLM proxy. It can be plugged into any OpenAI-compatible API. You don't have to use Harbor to use Boost, but the two connect really nicely, since Boost is developed as part of the Harbor project. I've added a minimal example of standalone Boost usage here: I do not plan on making a function or a pipe for Open WebUI, since there's no way to achieve similar functionality within that framework (no way to communicate with an artifact from inside the inference workflow).
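To make the "any OpenAI-compatible API" point concrete, here is a minimal sketch of what a client request to a Boost endpoint looks like. The base URL, port, and model name below are assumptions for illustration, not documented Boost defaults; substitute whatever your own deployment exposes. The helper just builds the standard OpenAI-style chat-completions request, which any HTTP client can then send.

```python
import json

def build_chat_request(base_url: str, model: str, user_message: str):
    """Build the URL and JSON body for an OpenAI-compatible
    /v1/chat/completions call. Works against any proxy that
    speaks the OpenAI API shape, Boost included."""
    url = base_url.rstrip("/") + "/v1/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return url, json.dumps(payload)

# Hypothetical local Boost endpoint and module-prefixed model name:
url, body = build_chat_request(
    "http://localhost:34131",      # assumed host/port
    "example-module:llama3.1",     # assumed model identifier
    "Hello!",
)
print(url)
```

Because the request shape is the stock OpenAI one, the same payload works unchanged with the official `openai` client by setting its `base_url` to the Boost endpoint.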
You mentioned this in a reddit post, so I have some doubts. Do we have to connect with Harbor to be able to use it, or have you also developed an Open WebUI function that we can easily install in Open WebUI and use with any other OpenAI-compatible API project, like LiteLLM?
If we do have to connect with Harbor, I would also like to understand how to use it.
Thank you :-)