Tool Calling is a great way to introduce ShuttleAI into your project seamlessly.
Tool calling lets you give the model certain information about a function, and the model will decide how to use that information to generate a function-ready, formatted response for you automatically.
While not all models support tool calling, both official models, shuttle-3 and shuttle-3-mini, support it, even during streaming.
What is a Tool?
Currently there is only one type of tool, a “function”; this is subject to change in the future.
A “function” lets you provide certain information about your function and its requirements, and then receive a formatted response ready for that function.
How should a Tool look?
Below we have a basic tool utilizing our get_current_weather function:
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        },
    }
]
Let's also write the function that we will call:
Keep in mind this is a basic example implementation; you would need to use your own weather API to get accurate results.
import random

def get_current_weather(location: str, unit: str = "fahrenheit"):
    """Get the current weather in a given location."""
    weather_data = {
        "tokyo": (75, 85),
        "san francisco": (55, 65),
        "paris": (65, 75)
    }
    location = location.lower()
    if location in weather_data:
        temp_range = weather_data[location]
        temperature = random.randint(*temp_range)
    else:
        temperature = "unknown"
    return {
        "location": location.title(),
        "temperature": temperature
    }
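Before wiring this up to the model, you can sanity-check the function on its own; the short snippet below is purely illustrative and just calls it directly:
# Quick local check of the get_current_weather function defined above
print(get_current_weather("Tokyo", unit="fahrenheit"))
# -> e.g. {'location': 'Tokyo', 'temperature': 79}
print(get_current_weather("Berlin"))
# -> {'location': 'Berlin', 'temperature': 'unknown'}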
Now that we have our tools and functions set up, let's make our request:
import asyncio
from shuttleai import AsyncShuttleAI

async def main():
    shuttleai = AsyncShuttleAI()
    response = await shuttleai.chat.completions.create(
        model="shuttle-3",
        messages=[{"role": "user", "content": "whats weather in china?"}],
        tools=tools
    )
    print(response)

asyncio.run(main())
This will result in something similar to the following:
{
    "id": "chatcmpl-1ae97f622b0e4438a5cb20d5b8e4cb43",
    "object": "chat.completion",
    "created": 1730230592,
    "model": "shuttle-3",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": null,
                "tool_calls": [
                    {
                        "id": "call_c93e703c551547dd9b461a9f818a81ea",
                        "type": "function",
                        "function": {
                            "name": "get_current_weather",
                            "arguments": "{\"location\":\"France\",\"unit\":\"celsius\"}"
                        }
                    }
                ]
            },
            "finish_reason": "tool_calls"
        }
    ],
    "usage": {
        "prompt_tokens": 52,
        "completion_tokens": 35,
        "total_charged": 0.000050499999999999994
    }
}
With this response, we can easily use Python’s built-in json module (import json) and call json.loads(response['choices'][0]['message']['tool_calls'][0]['function']['arguments']) to get the required properties for your function. Note that tool_calls is a list, so the first call is accessed with [0].
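As a rough sketch, assuming response is dict-like as in the printed output above (the SDK may instead return a typed object with attribute access), parsing the tool call and dispatching it to our local get_current_weather could look like this:
import json

# Grab the first tool call from the response shown above
tool_call = response["choices"][0]["message"]["tool_calls"][0]

# The arguments field is a JSON-encoded string, so decode it into a dict
arguments = json.loads(tool_call["function"]["arguments"])

# Dispatch to our local function with the decoded arguments
if tool_call["function"]["name"] == "get_current_weather":
    result = get_current_weather(
        location=arguments["location"],
        unit=arguments.get("unit", "fahrenheit"),
    )
    print(result)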
ShuttleAI supports both Parallel Tool Calling and Streamed Tool Calling!
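With parallel tool calling, tool_calls may contain several entries at once; a minimal sketch of handling that, again assuming the dict-style response shape shown above, is to loop over the list and dispatch each call by name:
import json

# Map tool names to the local functions that implement them
available_functions = {"get_current_weather": get_current_weather}

for tool_call in response["choices"][0]["message"]["tool_calls"]:
    name = tool_call["function"]["name"]
    arguments = json.loads(tool_call["function"]["arguments"])
    function = available_functions.get(name)
    if function is not None:
        print(name, "->", function(**arguments))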
For a full example using Tools, check out Web Access!