Support for o3-mini reasoning models #75
mbcrawfo pushed a commit to mbcrawfo/atomic-agents that referenced this issue on Feb 10, 2025.
mbcrawfo added a commit to mbcrawfo/atomic-agents that referenced this issue on Feb 10, 2025.
Hi all,
I wanted to try out o3-mini as a model, but the parameters have changed a bit on OpenAI's side. Instructor supports the new parameters, but the library currently doesn't give much flexibility over which parameters we can send to the model.
Here is the problematic code: `max_tokens` and `temperature` are not supported by o3-mini; the reasoning models use different parameters instead, namely `max_completion_tokens` and `reasoning_effort` (e.g. `reasoning_effort="high"`).
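To make the mismatch concrete, here is a small sketch (the parameter names are the OpenAI ones mentioned above; the dicts themselves are illustrative, not library code):

```python
# Parameters accepted by older chat models (e.g. gpt-4o) but
# rejected by o3-mini.
legacy_params = {
    "max_tokens": 1024,
    "temperature": 0.7,
}

# Reasoning-model equivalents: o3-mini expects these instead.
reasoning_params = {
    "max_completion_tokens": 1024,
    "reasoning_effort": "high",  # "low" | "medium" | "high"
}
```

A config that hard-codes `max_tokens` and `temperature` therefore produces requests that o3-mini rejects outright.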
We could simply add these to `BaseAgentConfig`, but that would require a code change every time the parameters change on the API side.
A better solution might be to add a dict such as `model_api_parameters` to `BaseAgentConfig` and just `**kwargs` it into instructor. This would be a breaking change, though, unless we leave `temperature` and `max_tokens` untouched.
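A minimal sketch of that idea, assuming a simplified stand-in for the real `BaseAgentConfig` (the field name `model_api_parameters` and the helper below are hypothetical, not actual atomic-agents code):

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class BaseAgentConfig:
    """Simplified stand-in for the library's config class."""
    model: str = "gpt-4o-mini"
    # Pass-through parameters forwarded verbatim to the API call,
    # so new options like max_completion_tokens or reasoning_effort
    # need no library change.
    model_api_parameters: dict[str, Any] = field(default_factory=dict)


def build_request_kwargs(config: BaseAgentConfig,
                         messages: list[dict[str, str]]) -> dict[str, Any]:
    # Merge the fixed arguments with the pass-through dict; entries in
    # model_api_parameters win on any key conflict.
    return {
        "model": config.model,
        "messages": messages,
        **config.model_api_parameters,
    }


config = BaseAgentConfig(
    model="o3-mini",
    model_api_parameters={
        "max_completion_tokens": 4096,
        "reasoning_effort": "high",
    },
)
kwargs = build_request_kwargs(
    config, messages=[{"role": "user", "content": "hi"}]
)
# kwargs can now be splatted into the instructor-patched client call.
```

The appeal of this design is that the library never needs to know the full parameter surface of each model; the user owns the dict and the API validates it.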