Tool calling (also known as function calling) lets models request to execute functions you define. The model decides when to call a tool and what arguments to pass — your application executes the function and returns the result.
Tool calling requires the Basic plan or higher. Check shuttleai.com/models to see which models support tool calling.

How it works

1. Define your tools. Describe the functions the model can call, including their parameters.
2. Send the request. Include the tool definitions in your API request alongside the conversation.
3. Model decides to call a tool. If the model determines it needs to use a tool, it responds with a tool_calls array instead of a regular message.
4. Execute the function. Your application runs the requested function with the provided arguments.
5. Return the result. Send the function result back to the model as a tool message.
6. Model generates final response. The model incorporates the function result and produces a final answer.
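The six steps form a loop, since a model may chain several tool rounds before it answers. As a minimal sketch (not a ShuttleAI API: the `run_tool` dispatcher is an assumed callback you supply, and `client` is any OpenAI-compatible client), a generic driver looks like:

```python
import json

def run_conversation(client, model, messages, tools, run_tool):
    """Loop until the model answers without requesting more tools.

    run_tool(name, args) is your dispatcher: it executes the named
    function and returns a JSON-serializable result.
    """
    while True:
        response = client.chat.completions.create(
            model=model, messages=messages, tools=tools, tool_choice="auto"
        )
        message = response.choices[0].message
        if not message.tool_calls:
            return message.content  # step 6: final answer

        # Steps 4-5: execute every requested call and append the results
        messages.append(message)
        for call in message.tool_calls:
            result = run_tool(call.function.name,
                              json.loads(call.function.arguments))
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": json.dumps(result),
            })
```

Each iteration sends the full history back, so the model always sees its own tool calls and their results before deciding whether to call again or answer.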

Example: Weather lookup

1. Define the tool

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a location.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City name, e.g. 'San Francisco'"
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "Temperature unit"
                    }
                },
                "required": ["location"]
            }
        }
    }
]

2. Send the request

from openai import OpenAI

client = OpenAI(
    api_key="shuttle-xxx",
    base_url="https://api.shuttleai.com/v1"
)

messages = [{"role": "user", "content": "What's the weather in Tokyo?"}]

response = client.chat.completions.create(
    model="shuttleai/auto",
    messages=messages,
    tools=tools,
    tool_choice="auto"
)

3. Handle the tool call

import json

message = response.choices[0].message

if message.tool_calls:
    # Append the assistant's tool-call message once, before any results
    messages.append(message)

    for tool_call in message.tool_calls:
        if tool_call.function.name == "get_weather":
            args = json.loads(tool_call.function.arguments)

            # Your function; call a real weather API here
            weather_result = get_weather(args["location"], args.get("unit", "celsius"))

            # Add the tool result
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": json.dumps(weather_result)
            })

    # Get the final response
    final_response = client.chat.completions.create(
        model="shuttleai/auto",
        messages=messages,
        tools=tools
    )
    
    print(final_response.choices[0].message.content)

Full example

Here’s the complete flow in one script:
import json
from openai import OpenAI

client = OpenAI(
    api_key="shuttle-xxx",
    base_url="https://api.shuttleai.com/v1"
)

# Define tools
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a location.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "City name"},
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
                },
                "required": ["location"]
            }
        }
    }
]

# Simulated function
def get_weather(location, unit="celsius"):
    return {"location": location, "temperature": 22, "unit": unit, "condition": "sunny"}

# Start conversation
messages = [{"role": "user", "content": "What's the weather like in Tokyo?"}]

response = client.chat.completions.create(
    model="shuttleai/auto",
    messages=messages,
    tools=tools,
    tool_choice="auto"
)

message = response.choices[0].message

# Process tool calls
if message.tool_calls:
    messages.append(message)
    
    for tool_call in message.tool_calls:
        args = json.loads(tool_call.function.arguments)
        result = get_weather(**args)
        
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": json.dumps(result)
        })

    # Get final answer
    response = client.chat.completions.create(
        model="shuttleai/auto",
        messages=messages,
        tools=tools
    )

print(response.choices[0].message.content)

Parallel tool calls

Models can request multiple tool calls in a single response. For example, if the user asks “What’s the weather in Tokyo and New York?”, the model may return two tool calls at once. Always iterate over the full tool_calls array and return results for all of them before sending the next request.
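As an illustrative sketch (the `SimpleNamespace` objects below are simulated stand-ins for the SDK's response objects, and `get_weather` is the same example function as above), a name-to-function dispatch table keeps parallel calls easy to handle:

```python
import json
from types import SimpleNamespace

# Example local function the model may call
def get_weather(location, unit="celsius"):
    return {"location": location, "temperature": 22, "unit": unit}

DISPATCH = {"get_weather": get_weather}

# Simulated parallel tool calls, shaped like the SDK's response objects
tool_calls = [
    SimpleNamespace(id="call_1", function=SimpleNamespace(
        name="get_weather", arguments='{"location": "Tokyo"}')),
    SimpleNamespace(id="call_2", function=SimpleNamespace(
        name="get_weather", arguments='{"location": "New York"}')),
]

# One tool message per call; every tool_call_id must receive a result
tool_messages = []
for call in tool_calls:
    func = DISPATCH[call.function.name]
    result = func(**json.loads(call.function.arguments))
    tool_messages.append({
        "role": "tool",
        "tool_call_id": call.id,
        "content": json.dumps(result),
    })
```

Appending all of `tool_messages` (after the assistant's tool-call message) before the next request is what satisfies the "results for all of them" requirement.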

Tool choice

Control when the model uses tools with tool_choice:
Value                                                Behavior
"auto"                                               Model decides whether to call a tool (default)
"none"                                               Model will never call tools
"required"                                           Model must call at least one tool
{"type": "function", "function": {"name": "..."}}    Force a specific tool

Streamed tool calls

Tool calls work with streaming too. When streaming, tool call arguments arrive incrementally: each chunk's delta.tool_calls carries a fragment of the JSON arguments, keyed by index. Accumulate the fragments until the stream finishes, then parse the completed arguments.
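A minimal accumulation sketch (the deltas below are simulated stand-ins shaped like the SDK's chunk objects; a real stream yields them from `client.chat.completions.create(..., stream=True)`):

```python
import json
from types import SimpleNamespace

def _delta(index, id=None, name=None, arguments=None):
    """Build a simulated streaming delta shaped like the SDK's chunk objects."""
    fn = SimpleNamespace(name=name, arguments=arguments)
    return SimpleNamespace(index=index, id=id, function=fn)

# Simulated stream: id and name arrive first, then argument fragments
deltas = [
    _delta(0, id="call_1", name="get_weather", arguments=""),
    _delta(0, arguments='{"loca'),
    _delta(0, arguments='tion": "Tokyo"}'),
]

# Accumulate fragments keyed by index until the stream ends
calls = {}
for d in deltas:
    call = calls.setdefault(d.index, {"id": None, "name": None, "arguments": ""})
    if d.id:
        call["id"] = d.id
    if d.function.name:
        call["name"] = d.function.name
    if d.function.arguments:
        call["arguments"] += d.function.arguments

args = json.loads(calls[0]["arguments"])  # {'location': 'Tokyo'}
```

Only after the stream completes is the `arguments` string guaranteed to be valid JSON, so defer `json.loads` until then.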

Supported models

Tool calling is supported on models that have the Tools badge at shuttleai.com/models, including:
  • ShuttleAI Auto
  • GPT-5.2
  • Claude Opus 4.6
  • Claude Sonnet 4.6
  • Claude Haiku 4.5