Large Language Models (LLMs) are incredibly powerful, but their knowledge is frozen in time and they can't interact with the outside world. To build true AI agents that can solve complex problems, we need to give them access to tools. This is where function calling comes in.
Tool use, in the context of AI, is the ability for a model to utilize external resources to accomplish a task. This could be anything from searching the web for up-to-date information to executing a piece of code or calling an API. Function calling is the mechanism that enables this interaction.
This guide will provide a comprehensive overview of building AI agents with tool use and function calling. We'll cover the fundamental concepts, explore the different approaches taken by major AI players like OpenAI, Anthropic, and Google, and dive into the practical aspects of building robust and reliable tool-using agents.
At its core, function calling allows a developer to define a set of custom functions that an LLM can choose to execute during a conversation. Instead of just generating text, the model can output a structured JSON object containing the name of a function to call and the arguments to pass to it.
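For example, the structured output might look like the following (a minimal sketch mirroring OpenAI's `tool_calls` shape; the `get_weather` function and its fields are hypothetical, and exact field names vary by provider):

```python
import json

# A hypothetical structured output from the model: it names a function
# and supplies the arguments as a JSON-encoded string.
tool_call = {
    "id": "call_001",
    "type": "function",
    "function": {
        "name": "get_weather",
        "arguments": '{"city": "Berlin", "unit": "celsius"}',
    },
}

# The application parses the arguments and dispatches to real code.
args = json.loads(tool_call["function"]["arguments"])
print(tool_call["function"]["name"], args)
```

Note that the model never executes anything itself; it only emits this description of a call, which your code is free to validate before running.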
This is a significant leap forward from simple text-in, text-out models. It transforms the LLM from a passive information generator into an active participant that can reason about when and how to use external tools to achieve a goal.
The general workflow for function calling is as follows:

1. You define the available functions (name, description, and parameter schema) and send them to the model along with the user's prompt.
2. The model decides whether a function is needed; if so, instead of plain text it returns a structured call containing the function name and arguments.
3. Your application executes the function with those arguments.
4. You send the function's result back to the model.
5. The model uses the result to produce its final response, or requests another call.
This loop can be repeated multiple times, allowing the agent to chain together multiple tool calls to solve complex problems.
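The loop can be sketched in plain Python. Here `fake_model` is a stub standing in for a real LLM API call, and `get_weather` is a toy tool; both are assumptions for illustration:

```python
import json

def get_weather(city: str) -> str:
    """Toy tool; a real implementation would call a weather API."""
    return json.dumps({"city": city, "temp_c": 21})

TOOLS = {"get_weather": get_weather}

def fake_model(messages):
    """Stub LLM: requests a tool on the first turn, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "get_weather",
                              "arguments": '{"city": "Berlin"}'}}
    return {"content": "It is 21 degrees Celsius in Berlin."}

def run_agent(prompt: str) -> str:
    messages = [{"role": "user", "content": prompt}]
    while True:
        reply = fake_model(messages)
        call = reply.get("tool_call")
        if call is None:                      # model produced a final answer
            return reply["content"]
        args = json.loads(call["arguments"])  # parse structured arguments
        result = TOOLS[call["name"]](**args)  # execute the chosen tool
        messages.append({"role": "tool", "content": result})

print(run_agent("What's the weather in Berlin?"))
```

In a production agent the `while` loop usually gets an iteration cap so a confused model cannot call tools forever.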
While the core concept is the same, the major AI providers have slightly different implementations of function calling.
| Feature | OpenAI | Anthropic | Google (Vertex AI) |
|---|---|---|---|
| Tool Definition | JSON Schema | Custom XML-like format or JSON | OpenAPI v3 Specification |
| Invocation | `tool_calls` in response | `tool_use` content block | `function_call` in response |
| Parallel Calls | Supported | Supported | Supported |
| Streaming | Supported | Supported | Supported |
OpenAI's approach is widely considered the industry standard and is well-documented. It uses JSON Schema to define functions, which provides a familiar and powerful way to describe complex data structures.
Anthropic's implementation is similar but uses a custom XML-like syntax for tool definitions in its earlier models, though recent updates have added JSON support. It emphasizes a conversational approach where the model and user can collaborate on tool use.
Google's Vertex AI leverages the OpenAPI v3 specification for defining tools. This is a powerful choice for developers already using OpenAPI for their APIs, as it allows them to seamlessly integrate their existing services with AI agents.
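As a concrete illustration, here is a single tool described with JSON Schema in the OpenAI-style envelope (the `get_weather` function and its parameters are hypothetical; Anthropic's format is close, with the schema under an `input_schema` key):

```python
# A tool definition the model receives alongside the conversation.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {  # JSON Schema describing the arguments
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
                "unit": {"type": "string",
                         "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}
```

The `description` fields do real work here: they are the only information the model has when deciding whether and how to call the tool, so they deserve the same care as user-facing documentation.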
Tool-using agents can be equipped with a wide range of capabilities. Some of the most common categories include:

- Information retrieval: web search, database queries, and document lookup for up-to-date or private data.
- Code execution: running snippets in a sandbox to compute, analyze data, or transform files.
- API calls: interacting with external services such as calendars, CRMs, or payment systems.
- Communication: sending emails, messages, or notifications on the user's behalf.
There are several common architectural patterns for building tool-using agents:

- Single-agent loop: one model reasons, calls tools, and responds in a repeated reason-act cycle (the ReAct pattern).
- Router: a lightweight model classifies the request and routes it to a specialized tool or agent.
- Plan-and-execute: the model first drafts a multi-step plan, then executes each step with the appropriate tool.
- Multi-agent: several specialized agents collaborate, each with its own toolset.
"The future of AI is not a single, monolithic model, but a swarm of specialized agents, each an expert in its domain, collaborating to solve problems beyond the scope of any single mind." – Chris Dixon
Building reliable tool-using agents requires careful consideration of error handling and safety.
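A common pattern is to catch tool failures, retry transient errors, and hand the model a structured error message to reason about instead of crashing the loop. A minimal sketch (the flaky tool is contrived for illustration):

```python
import time

def call_tool_safely(fn, args, retries=2, delay=0.0):
    """Run a tool; retry transient failures, then report the error
    back to the model as data rather than raising."""
    for attempt in range(retries + 1):
        try:
            return {"ok": True, "result": fn(**args)}
        except Exception as exc:
            if attempt < retries:
                time.sleep(delay)  # back off before retrying
                continue
            return {"ok": False, "error": str(exc)}

calls = {"n": 0}

def flaky_search(query: str) -> str:
    """Contrived tool that times out on its first invocation."""
    calls["n"] += 1
    if calls["n"] == 1:
        raise TimeoutError("upstream timeout")
    return f"results for {query}"

outcome = call_tool_safely(flaky_search, {"query": "llm agents"})
print(outcome)
```

Returning `{"ok": False, "error": ...}` to the model lets it apologize, rephrase, or try a different tool, which is usually better than surfacing a stack trace to the user.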
Tool calling is a powerful technique, but it's not always the best solution. For knowledge that rarely changes, retrieval-augmented generation or fine-tuning may be simpler and cheaper; tool calling earns its added complexity when the agent must take actions or fetch live data.
At Propelius, we've been at the forefront of building custom AI agents for our clients. We've developed a range of solutions, from simple chatbots that can answer questions about a company's products to complex, multi-agent systems that can automate entire business processes.
Our team has extensive experience with all the major function calling implementations and can help you choose the right tools and architecture for your specific needs. We also place a strong emphasis on building robust and reliable agents that can handle errors gracefully and operate safely and securely. Read more about our work on our blog.
**What is the difference between tool use and function calling?**
Tool use is the general concept of an AI model using external resources, while function calling is the specific mechanism that enables this interaction.

**Can I give the model more than one function?**
Yes. You can define multiple functions, and the LLM will choose the most appropriate one to call based on the user's prompt.

**How should I handle API keys and other credentials?**
Store API keys and other sensitive credentials securely, and never include them directly in your prompts. The function that calls the API should be responsible for retrieving and using the necessary credentials.
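In practice that means the tool reads its key from the environment (or a secrets manager) at call time, so the model only ever sees the function's result. A sketch, where `WEATHER_API_KEY` is a hypothetical variable name:

```python
import os

def get_weather(city: str) -> str:
    # The credential lives in the environment, never in the prompt
    # or in the tool's schema definition the model sees.
    api_key = os.environ.get("WEATHER_API_KEY")
    if api_key is None:
        return "error: WEATHER_API_KEY is not configured"
    # A real implementation would pass api_key to the weather API here.
    return f"weather for {city} (fetched with a key of length {len(api_key)})"

os.environ["WEATHER_API_KEY"] = "demo-secret"  # stand-in for real config
print(get_weather("Berlin"))
```

Returning a plain error string when the key is missing follows the same principle as the error-handling section above: the model gets data it can react to, and the secret itself never enters the conversation.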