update llama 3.3 tool-calling syntax (#274)
* update llama 3.3 tool-calling syntax

* Addressing review comments in typo
ShishirPatil authored Mar 1, 2025
1 parent e3a5fc5 commit ee8ede3
Showing 1 changed file with 123 additions and 2 deletions.
models/llama3_3/prompt_format.md
@@ -72,14 +72,135 @@ Here's my response

## Tool Calling Formats

Here we describe how to invoke the Llama 3.3 instruction-tuned model for tool calling (also called function calling). We recommend zero-shot function calling over built-in tools.

### Zero shot function calling


The Llama 3.3 70B instruct model continues the zero-shot function-calling format introduced in Llama 3.2, which is designed to be more flexible and powerful than the format in Llama 3.1.
All available functions can be provided in the system message.

Here is an example:


##### Input Prompt Format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are an expert in composing functions. You are given a question and a set of possible functions.
Based on the question, you will need to make one or more function/tool calls to achieve the purpose.
If none of the function can be used, point it out. If the given question lacks the parameters required by the function,
also point it out. You should only return the function call in tools call sections.
If you decide to invoke any of the function(s), you MUST put it in the format of [func_name1(params_name1=params_value1, params_name2=params_value2...), func_name2(params)]
You SHOULD NOT include any other text in the response.
Here is a list of functions in JSON format that you can invoke.
[
{
"name": "get_weather",
"description": "Get weather info for places",
"parameters": {
"type": "dict",
"required": [
"city"
],
"properties": {
"city": {
"type": "string",
"description": "The name of the city to get the weather for"
},
"metric": {
"type": "string",
"description": "The metric for weather. Options are: celsius, fahrenheit",
"default": "celsius"
}
}
}
}
]<|eot_id|><|start_header_id|>user<|end_header_id|>
What is the weather in SF and Seattle?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```

##### Model Response Format
```
[get_weather(city='San Francisco', metric='celsius'), get_weather(city='Seattle', metric='celsius')]<|eot_id|>
```


##### Notes

- The output natively supports multiple, parallel tool calls
- The JSON format for defining functions in the system prompt is similar to Llama 3.1
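A response in this bracketed format can be turned into structured calls with Python's `ast` module. The `parse_tool_calls` helper below is an illustrative sketch, not part of the Llama release:

```python
import ast

def parse_tool_calls(response: str):
    """Parse a "[func(a=1), func2(b=2)]" style response into (name, kwargs) pairs."""
    text = response.replace("<|eot_id|>", "").strip()
    tree = ast.parse(text, mode="eval")
    calls = []
    for call in tree.body.elts:  # elements of the top-level list literal
        name = call.func.id
        kwargs = {kw.arg: ast.literal_eval(kw.value) for kw in call.keywords}
        calls.append((name, kwargs))
    return calls

calls = parse_tool_calls(
    "[get_weather(city='San Francisco', metric='celsius'), "
    "get_weather(city='Seattle', metric='celsius')]<|eot_id|>"
)
```

Because the format is valid Python syntax, parsing via the AST avoids `eval` on untrusted model output.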


### Zero shot function calling with user message


While the default is to provide the function definitions in a system message, the Llama 3.3 model can also accept the definitions of all available tools in a user message.


##### Input Prompt Format
```
<|begin_of_text|><|start_header_id|>user<|end_header_id|>
Questions: Can you retrieve the details for the user with the ID 7890, who has black as their special request?
Here is a list of functions in JSON format that you can invoke:
[
{
"name": "get_user_info",
"description": "Retrieve details for a specific user by their unique identifier. Note that the provided function is in Python 3 syntax.",
"parameters": {
"type": "dict",
"required": [
"user_id"
],
"properties": {
"user_id": {
"type": "integer",
"description": "The unique identifier of the user. It is used to fetch the specific user details from the database."
},
"special": {
"type": "string",
"description": "Any special information or parameters that need to be considered while fetching user details.",
"default": "none"
}
}
}
}
]
Should you decide to return the function call(s), put them in the format of [func1(params_name=params_value, params_name2=params_value2...), func2(params)]
You SHOULD NOT include any other text in the response.<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```

##### Model Response Format
```
[get_user_info(user_id=7890, special='black')]<|eot_id|>
```


##### Notes

- The tool-call format for the model is the same whether the function definitions are provided in the system or the user message.
- While built-in tool calls end with `<|eom_id|>`, notice the `<|eot_id|>` for zero-shot tool calls.
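Assembling such a user-message prompt programmatically can be sketched as follows. `build_user_prompt` is a hypothetical helper; the literal strings are taken from the example above:

```python
import json

# Hypothetical helper: assembles the user-message tool-calling prompt shown above.
def build_user_prompt(question: str, functions: list) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n"
        f"Questions: {question}\n"
        "Here is a list of functions in JSON format that you can invoke:\n"
        f"{json.dumps(functions, indent=4)}\n"
        "Should you decide to return the function call(s), put them in the format of "
        "[func1(params_name=params_value, params_name2=params_value2...), func2(params)]\n"
        "You SHOULD NOT include any other text in the response."
        "<|eot_id|><|start_header_id|>assistant<|end_header_id|>"
    )

prompt = build_user_prompt(
    "Can you retrieve the details for the user with the ID 7890, "
    "who has black as their special request?",
    [{"name": "get_user_info", "description": "Retrieve details for a specific user."}],
)
```

The prompt ends with the assistant header so the model's next tokens are the tool call itself.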


### Builtin Tool Calling

The three built-in tools (`brave_search`, `wolfram_alpha`, and code interpreter) can be turned on using the system prompt:
- Brave Search: Tool call to perform web searches.
- Wolfram Alpha: Tool call to perform complex mathematical calculations.
- Code Interpreter: Enables the model to output python code.
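A system prompt that turns these tools on is sketched below, following the Llama 3.1 convention that 3.3 continues; the exact header lines should be checked against the official prompt-format reference:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

Environment: ipython
Tools: brave_search, wolfram_alpha

You are a helpful assistant.<|eot_id|>
```

The `Environment: ipython` line enables code-interpreter output, and the `Tools:` line lists the built-in tools to activate.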



Here is an example of a conversation using `brave_search`:
