Overview

Tool calling (also called function calling) enables models to request execution of external functions. The model proposes tool calls; your application executes them and returns results to continue the conversation.

How It Works

  1. Model proposes tool call - the model decides a tool should be called and emits its name and arguments
  2. Client executes tool - your application runs the function locally
  3. Return results - send the tool output back as a tool message
  4. Model processes - the model uses the results to generate the final response
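The four steps above can be sketched end to end in Python. The assistant message here is hard-coded in place of a live model response, and get_weather is a stand-in function; the message shapes match the JSON shown later in this page:

```python
import json

def get_weather(location: str) -> str:
    # Stand-in for a real weather lookup
    return f"72°F, sunny in {location}"

TOOLS = {"get_weather": get_weather}

# Step 1: the model proposes a tool call (simulated response here)
assistant_message = {
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": "call_abc123",
        "type": "function",
        "function": {
            "name": "get_weather",
            "arguments": "{\"location\": \"San Francisco\"}",
        },
    }],
}

# Steps 2-3: execute each tool locally and build one tool message per call
tool_messages = []
for call in assistant_message["tool_calls"]:
    fn = TOOLS[call["function"]["name"]]
    args = json.loads(call["function"]["arguments"])
    tool_messages.append({
        "role": "tool",
        "content": fn(**args),
        "tool_call_id": call["id"],
    })

# Step 4: send assistant_message + tool_messages back for the final response
print(tool_messages[0]["content"])
```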

Request Format

Include a tools array in your request:
{
  "model": "openai/gpt-5-mini",
  "messages": [
    {"role": "user", "content": "What's the weather in San Francisco?"}
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather in a location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "The city and state"
            },
            "unit": {
              "type": "string",
              "enum": ["celsius", "fahrenheit"]
            }
          },
          "required": ["location"]
        }
      }
    }
  ],
  "tool_choice": "auto"
}
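The same request built as a Python payload. The /chat/completions path on the base URL is an assumption here, and the POST itself is shown only as a comment:

```python
import json

payload = {
    "model": "openai/gpt-5-mini",
    "messages": [
        {"role": "user", "content": "What's the weather in San Francisco?"}
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather in a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "The city and state"},
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        },
    }],
    "tool_choice": "auto",
}

body = json.dumps(payload)
# POST body to https://api.anannas.ai/v1/chat/completions (path assumed)
# with headers {"Authorization": "Bearer <ANANNAS_API_KEY>",
#               "Content-Type": "application/json"}
```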

Tool Definition Schema

type Tool = {
  type: "function";
  function: {
    name: string;
    description: string;
    parameters: {
      type: "object";
      properties: Record<string, JSONSchemaProperty>;
      required?: string[];
      additionalProperties?: boolean;
    };
  };
};
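A small sanity check against this shape can catch malformed definitions before they reach the API. This validator is illustrative, not part of any SDK:

```python
def validate_tool(tool: dict) -> list[str]:
    """Return a list of problems with a tool definition; empty means valid."""
    errors = []
    if tool.get("type") != "function":
        errors.append("type must be 'function'")
    fn = tool.get("function", {})
    for key in ("name", "description", "parameters"):
        if key not in fn:
            errors.append(f"function.{key} is missing")
    params = fn.get("parameters", {})
    if params.get("type") != "object":
        errors.append("parameters.type must be 'object'")
    if not isinstance(params.get("properties"), dict):
        errors.append("parameters.properties must be an object")
    return errors

good = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
        },
    },
}
print(validate_tool(good))
```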

Tool Choice

Control when tools are called:
  • "auto": Model decides (default)
  • "none": No tools called
  • "required": Model must call a tool
  • {type: "function", function: {name: string}}: Force specific function
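The four forms as Python values, with a small helper for the force-a-specific-function case (the helper name is illustrative):

```python
AUTO = "auto"          # model decides whether to call a tool (default)
NONE = "none"          # tools disabled for this request
REQUIRED = "required"  # model must call some tool

def force_tool(name: str) -> dict:
    """tool_choice value that forces the named function to be called."""
    return {"type": "function", "function": {"name": name}}

print(force_tool("get_weather"))
```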

Response Format

When a tool is called, the response includes tool_calls:
{
  "choices": [{
    "message": {
      "role": "assistant",
      "content": null,
      "tool_calls": [{
        "id": "call_abc123",
        "type": "function",
        "function": {
          "name": "get_weather",
          "arguments": "{\"location\": \"San Francisco\"}"
        }
      }]
    },
    "finish_reason": "tool_calls"
  }]
}

Executing Tools

Parse the arguments (a JSON string) and execute the matching function:
import json

tool_call = response["choices"][0]["message"]["tool_calls"][0]
function_name = tool_call["function"]["name"]
arguments = json.loads(tool_call["function"]["arguments"])

# Execute function
if function_name == "get_weather":
    result = get_weather(arguments["location"])
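Hard-coded if chains get unwieldy as you add tools; a dispatch table keyed by function name scales better. A sketch, with get_weather stubbed out:

```python
import json

def get_weather(location: str, unit: str = "fahrenheit") -> str:
    return f"72°F, sunny in {location}"  # stub

DISPATCH = {"get_weather": get_weather}

def execute_tool_call(tool_call: dict) -> str:
    """Look up the named function and call it with the parsed arguments."""
    name = tool_call["function"]["name"]
    if name not in DISPATCH:
        return f"error: unknown function {name!r}"
    args = json.loads(tool_call["function"]["arguments"])
    return DISPATCH[name](**args)

result = execute_tool_call({
    "id": "call_abc123",
    "type": "function",
    "function": {"name": "get_weather",
                 "arguments": "{\"location\": \"San Francisco\"}"},
})
```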

Returning Tool Results

Send results as a tool message:
{
  "model": "openai/gpt-5-mini",
  "messages": [
    {"role": "user", "content": "What's the weather in NYC?"},
    {
      "role": "assistant",
      "content": null,
      "tool_calls": [{
        "id": "call_123",
        "type": "function",
        "function": {
          "name": "get_weather",
          "arguments": "{\"location\": \"New York City\"}"
        }
      }]
    },
    {
      "role": "tool",
      "content": "72°F, sunny",
      "tool_call_id": "call_123"
    }
  ],
  "tools": [...]
}
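Building the tool message in code is a one-liner once you have the call and its result; the helper name below is illustrative:

```python
def tool_result(tool_call: dict, content: str) -> dict:
    """Build the tool message answering a specific tool call."""
    return {"role": "tool", "content": content, "tool_call_id": tool_call["id"]}

call = {
    "id": "call_123",
    "type": "function",
    "function": {"name": "get_weather",
                 "arguments": "{\"location\": \"New York City\"}"},
}
msg = tool_result(call, "72°F, sunny")
```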

Parallel Tool Calls

Control whether the model may issue multiple tool calls in a single response:
{
  "parallel_tool_calls": true
}
Default: true. When enabled, the model can call several tools at once; set it to false to limit the model to at most one tool call per turn.
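With parallel calls enabled, one assistant message may carry several tool_calls, and every one needs its own tool message matched by tool_call_id. A sketch with simulated calls and a stubbed get_weather:

```python
import json

def get_weather(location: str) -> str:
    return f"72°F, sunny in {location}"  # stub

tool_calls = [
    {"id": "call_1", "type": "function",
     "function": {"name": "get_weather", "arguments": "{\"location\": \"NYC\"}"}},
    {"id": "call_2", "type": "function",
     "function": {"name": "get_weather", "arguments": "{\"location\": \"LA\"}"}},
]

# One tool message per call, each tied back to its call id
tool_messages = [
    {"role": "tool",
     "content": get_weather(**json.loads(c["function"]["arguments"])),
     "tool_call_id": c["id"]}
    for c in tool_calls
]
```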

Multi-Turn Conversations

Maintain conversation history including all tool calls and results:
messages = [
    {"role": "user", "content": "What's 25 * 17?"},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": "call_1",
            "type": "function",
            "function": {
                "name": "calculator",
                "arguments": "{\"operation\": \"multiply\", \"a\": 25, \"b\": 17}"
            }
        }]
    },
    {
        "role": "tool",
        "content": "425",
        "tool_call_id": "call_1"
    }
]

# Continue conversation
response = client.chat.completions.create(
    model="openai/gpt-5-mini",
    messages=messages,
    tools=[...]
)

Supported Models


For a complete list of models that support tool calling, visit anannas.ai/models and filter by capability.
Tool calling is supported on:
  • OpenAI: GPT-4, GPT-3.5 Turbo, GPT-5 Mini, o1, o3
  • Anthropic: Claude 3 Opus, Claude 3 Sonnet, Claude 3 Haiku, Claude Sonnet 4.5
  • Other providers: Check /v1/models for tool_calling capability

Best Practices

  1. Clear descriptions: Write detailed function descriptions
  2. Required fields: Specify required array for critical parameters
  3. Type constraints: Use enum for limited options
  4. Error handling: Handle invalid tool calls gracefully
  5. Conversation history: Include all tool calls and results in subsequent requests
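Point 4 in practice: rather than raising on a malformed or unknown call, return the error as the tool message content so the model can see what went wrong and recover. A sketch; the error message format is a convention, not an API requirement:

```python
import json

def get_weather(location: str) -> str:
    return f"72°F, sunny in {location}"  # stub

KNOWN = {"get_weather": get_weather}

def safe_execute(tool_call: dict) -> dict:
    """Always return a tool message, even when execution fails."""
    name = tool_call["function"]["name"]
    try:
        args = json.loads(tool_call["function"]["arguments"])
        content = str(KNOWN[name](**args))
    except json.JSONDecodeError:
        content = f"Error: arguments for {name} were not valid JSON"
    except KeyError:
        content = f"Error: unknown function {name}"
    except TypeError as exc:
        content = f"Error: bad arguments for {name}: {exc}"
    return {"role": "tool", "content": content, "tool_call_id": tool_call["id"]}

bad = {"id": "call_9", "type": "function",
       "function": {"name": "get_weather", "arguments": "{not json"}}
msg = safe_execute(bad)
```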

Example: Complete Tool Calling Flow

from openai import OpenAI
import json

client = OpenAI(
    base_url="https://api.anannas.ai/v1",
    api_key="<ANANNAS_API_KEY>"
)

def get_weather(location: str, unit: str = "fahrenheit") -> str:
    # Simulate weather API
    return f"72°F, sunny in {location}"

messages = [
    {"role": "user", "content": "What's the weather in San Francisco?"}
]

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
            },
            "required": ["location"]
        }
    }
}]

# First request
response = client.chat.completions.create(
    model="openai/gpt-5-mini",
    messages=messages,
    tools=tools
)

message = response.choices[0].message
messages.append(message)  # keep the assistant message (with its tool_calls) in history

# Execute tool
if message.tool_calls:
    for tool_call in message.tool_calls:
        args = json.loads(tool_call.function.arguments)
        result = get_weather(**args)  # passes the optional "unit" through as well
        
        messages.append({
            "role": "tool",
            "content": result,
            "tool_call_id": tool_call.id
        })

# Get final response
response = client.chat.completions.create(
    model="openai/gpt-5-mini",
    messages=messages,
    tools=tools
)

print(response.choices[0].message.content)

See Also