In conversations about AI agent integrations, three terms keep showing up that often get used as if they were synonyms. Function Calling, Tool Use, and Model Context Protocol sound so close that articles and Slack channels routinely mix them up. They actually describe three different concepts on three different layers.
Anyone planning an AI integration runs into this terminology question sooner or later. The choice between Function Calling, Tool Use, and MCP isn’t arbitrary. Each one carries a different contract with a different vendor and a different portability profile.
Function Calling is OpenAI’s API mechanism for letting a model emit structured function calls inside a conversation. Tool Use is Anthropic’s term for the same basic idea in Claude. MCP (Model Context Protocol) sits one layer below. It’s an open standard for serving tools from external servers, independent of the agent vendor. Function Calling and Tool Use are vendor-specific APIs; MCP is a vendor-neutral protocol.
Three terms, three layers
The three terms don’t sit at the same level of abstraction. Treating them as interchangeable alternatives misses the difference between a vendor API and an open protocol.
Function Calling and Tool Use are both model-API features. They describe how a specific language model formulates structured calls inside a conversation. Function Calling comes from OpenAI and shipped in summer 2023. Tool Use is Anthropic’s equivalent in Claude, which followed shortly after. The two are technically very similar but syntactically different, since each vendor defines its own JSON schema.
MCP is a protocol layer on top of that. It doesn’t define how a model formulates tool calls (the vendor still does that), but how tools get served from external servers and consumed. An MCP server is vendor-neutral. It works with Claude, with ChatGPT, with Cursor, and with any other agent that supports the protocol.
For a grounding in MCP basics, see What is MCP?.
If you want to keep the three terms straight over the long haul, a simple question helps. Function Calling answers "How does OpenAI do this?", Tool Use answers "How does Anthropic do this?", and MCP answers "How do you do this in a vendor-neutral way?". The first two are vendor answers to the same underlying problem; the third is the open spec that every vendor can implement against.
Function Calling as the OpenAI pattern
OpenAI introduced Function Calling as an API feature in June 2023. The basic idea is that instead of a free-form answer, a language model returns a structured JSON object that represents a function call. The application executes the call, returns the result, and the model continues the conversation.
Technically, it works like this. The developer hands the OpenAI API a list of function definitions with a name, a description, and a parameter schema. Based on the conversation, the model decides whether to call one of those functions and returns a call object.
```json
{
  "function_call": {
    "name": "getCustomerOrders",
    "arguments": "{\"customerId\": \"abc-123\", \"limit\": 10}"
  }
}
```

The upside is pragmatism. Function Calling drops into an existing application in a few lines of code, with no separate servers or protocol implementations.
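The application-side loop can be sketched in a few lines. The function name and its implementation below are illustrative stand-ins, not a real OpenAI API call; only the definition schema and the call shape follow the vendor format.

```python
import json

# A function definition in OpenAI's Function Calling schema.
# "getCustomerOrders" is a hypothetical example, not a real API.
FUNCTION_DEFS = [
    {
        "name": "getCustomerOrders",
        "description": "Fetch the most recent orders for a customer.",
        "parameters": {
            "type": "object",
            "properties": {
                "customerId": {"type": "string"},
                "limit": {"type": "integer"},
            },
            "required": ["customerId"],
        },
    }
]

def get_customer_orders(customerId: str, limit: int = 10) -> list:
    # Stand-in for a real database or API lookup.
    return [{"orderId": f"{customerId}-{i}"} for i in range(limit)]

DISPATCH = {"getCustomerOrders": get_customer_orders}

def handle_function_call(call: dict) -> list:
    """Run the function the model asked for; arguments arrive as a JSON string."""
    args = json.loads(call["arguments"])
    return DISPATCH[call["name"]](**args)

# Simulated model output in the shape shown above.
orders = handle_function_call(
    {"name": "getCustomerOrders", "arguments": '{"customerId": "abc-123", "limit": 2}'}
)
```

In the real loop, the result then goes back to the model as a function message so the conversation can continue.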
The downside is lock-in. A Function-Calling integration is tied to OpenAI. Switching models means rewriting the tool layer. Running multiple models in parallel means maintaining multiple integration paths. That was acceptable in 2023, when OpenAI was effectively the only vendor in town. In 2026, it increasingly feels like dead weight.
In practice, OpenAI has extended the Function-Calling API several times without fundamentally changing it. parallel_tool_calls for multiple simultaneous function calls landed in 2024, and strict schema validation (the strict flag, part of Structured Outputs) followed soon after. These extensions are useful, but they only deepen the lock-in, because they don’t carry over directly to other vendors.
Tool Use as the Anthropic pattern
Tool Use is Anthropic’s term for the same basic idea in Claude. The mechanics are similar, but the details diverge in ways that tend to create friction in practice.
Anthropic introduced Tool Use as an API feature in spring 2024. Based on the conversation, the model decides whether to call a tool and returns a structured tool-use object. The application runs the tool and returns the result, and Claude picks the conversation back up.
The syntactic differences from OpenAI’s Function Calling are small but real. Anthropic uses tool_use objects with name, input, and id, while OpenAI uses function_call with name and arguments. Anyone building a tool layer for both vendors ends up maintaining two parallel representations.
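The two parallel representations can be collapsed behind a small adapter. The field names below match the vendor shapes described above; the normalization itself is a sketch, not a library API.

```python
import json

def normalize_call(vendor: str, payload: dict) -> dict:
    """Map OpenAI's function_call and Anthropic's tool_use shapes
    onto one internal representation (illustrative sketch)."""
    if vendor == "openai":
        call = payload["function_call"]
        # OpenAI serializes arguments as a JSON string.
        return {"name": call["name"], "args": json.loads(call["arguments"]), "id": None}
    if vendor == "anthropic":
        # Anthropic delivers input as a parsed object and adds a call id.
        return {"name": payload["name"], "args": payload["input"], "id": payload["id"]}
    raise ValueError(f"unknown vendor: {vendor}")
```

An adapter like this keeps the dispatch logic vendor-agnostic, but the tool definitions themselves still have to be maintained twice.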
Conceptually, Function Calling and Tool Use share more than they don’t. Both are vendor-specific APIs for the same basic concept (structured function calls inside a conversation). Both suffer from the same lock-in problem. Without MCP, neither is portable across models.
Anthropic also has a few extensions of its own that have no direct equivalent in OpenAI’s Function Calling. tool_choice with options like any and auto controls whether Claude is allowed to call a tool at all, or must pick one from a specific list. Details like this are a reminder that even between the two market leaders, subtle differences in tool logic linger. A shared codebase has to absorb them.
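The tool_choice shapes can be captured in a small helper. The option values ("auto", "any", and a forced specific tool) follow Anthropic's documented variants; the helper itself is illustrative.

```python
def make_tool_choice(mode: str, tool_name: str = None) -> dict:
    """Build an Anthropic tool_choice value: "auto" lets Claude decide,
    "any" forces some tool call, "tool" forces one specific tool."""
    choice = {"type": mode}
    if mode == "tool":
        choice["name"] = tool_name
    return choice
```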
We ran a pilot where we ported an existing Function-Calling integration onto Claude’s Tool Use in parallel to keep the model choice open. The mechanical effort per tool was about an hour. What took noticeably longer was reworking the prompt structures, because the two models react differently to tool descriptions. After three weeks of running both in parallel, we migrated to MCP, which collapses both model paths into a single tool definition.
MCP as a vendor-neutral protocol layer
MCP starts one layer down. Instead of a model-specific API, it defines a protocol that runs between agents and tool servers. An MCP server provides tools; an MCP client (typically an agent) consumes them. Which language model runs underneath is irrelevant for the tool definition.
Separating tool delivery from model invocation has another effect that tends to surface only on the second iteration. Tools become standalone artifacts with their own lifecycle. They have their own versions, their own audit logs, their own permissions, and their own maintenance owners. What used to sit tangled inside the application codebase now becomes a dedicated layer. It can be operated independently of the consuming agent and independently of the underlying model.
The protocol layer solves two problems that Function Calling and Tool Use leave open.
Portability. An MCP tool works with Claude, ChatGPT, Cursor, and any other agent that supports the protocol. Switching models leaves the tool layer untouched.
Externalization. Tools run in their own server process, not embedded inside the application that uses the agent. That makes tools reusable across applications and enables central maintenance.
What MCP doesn’t replace is the call mechanic inside the conversation itself. When the agent calls a tool, the call still goes through Function Calling or Tool Use, depending on the model. MCP handles only the connection to the external tool server and the protocol plumbing; the model API beneath it stays vendor-specific.
That clears up a question that often gets asked the wrong way around. The question isn’t whether a team should use Function Calling or MCP. In practice, both show up. Function Calling or Tool Use is the model’s call mechanic; MCP is the mediation layer to the tool server. Anyone building a modern agent architecture usually ends up with both in the stack, one per layer.
When each approach is worth it
The choice between the three approaches follows a simple logic that tends to hold up in practice.
| Situation | Recommendation | Reasoning |
|---|---|---|
| Quick prototype with a single model | Function Calling or Tool Use directly | Low effort, no protocol investment needed |
| Multiple models in parallel | MCP as the protocol layer | One tool layer instead of N vendor-specific paths |
| Keep the option to switch models open | MCP from day one | Avoids migration work later |
| Tools used across multiple applications | MCP as a server architecture | Central maintenance, no duplication |
For enterprise setups, the answer is usually MCP. The upfront effort is higher than a direct Function-Calling integration, but the lock-in protection and reusability tend to pay off within a few months.
For prototypes and small integrations, Function Calling or Tool Use is still the more pragmatic choice. A single tool for a single application running against a fixed model doesn’t need the protocol layer.
In practice, we tend to see three typical maturity stages. The first stage is a prototype integration where Function Calling runs directly out of the application against a single model. The second stage shows up when the team realizes the same tools are needed in multiple applications, so the tool code gets pulled out into its own service, often still on the Function-Calling API. The third stage is the move to MCP as a protocol layer. That happens either because a model switch is on the horizon, or because the number of tool consumers has grown to the point where reuse matters more than time to first call.
The rule of thumb from the field is this. Once more than one model or more than one application is in play, MCP starts to pay off. For a single application against a fixed model, Function Calling is still the more direct choice.
Migration paths between the approaches
Anyone who started on Function Calling or Tool Use and wants to migrate to MCP has a clear path.
Existing function definitions get translated into MCP tools. Mechanically, that part is trivial, since tool schemas are JSON Schema in both worlds. The real work sits in the server architecture, not in the tool mechanics.
Existing applications get switched over to an MCP client. Instead of passing Function-Calling definitions directly to the model API, the application asks the MCP server for available tools and passes those definitions on. The model API underneath stays the same.
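The mechanical part of that translation is small, because both sides carry JSON Schema. A sketch of the field mapping (the rename from parameters to inputSchema is the bulk of it; rewriting descriptions still happens by hand):

```python
def to_mcp_tool(fn_def: dict) -> dict:
    """Translate an OpenAI function definition into an MCP tool
    definition. Illustrative sketch of the field rename only."""
    return {
        "name": fn_def["name"],
        "description": fn_def.get("description", ""),
        "inputSchema": fn_def.get("parameters", {"type": "object"}),
    }
```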
A migration from Function Calling or Tool Use to MCP that skips a schema review tends to inherit the usual weaknesses. Tool descriptions optimized for one specific model don’t automatically become agent-neutral once they land in MCP. The migration is the moment to rewrite descriptions from the agent’s perspective, not the model’s.
In most cases, a migration runs about two weeks per tool family. What takes longer is the organizational question of who owns the tool layer going forward, since it splits off from the application code and picks up its own lifecycle.
A second migration path is worth considering for teams that aren’t running Function Calling or Tool Use in production yet. They can skip the vendor-specific stage and start straight on MCP. The upfront effort is higher, but they avoid the later migration entirely. This route fits especially well for greenfield projects, or for teams already building an API platform that treats MCP as a first-class piece.
On the versioning side, it’s worth remembering that MCP itself keeps evolving. The auth profiles, resource definitions, and streaming pieces have been extended several times since November 2024. Anyone planning a migration should first check which MCP version the target server needs to support, and line up with the current spec versions to avoid building on top of an outdated one.
How api-portal.io supports the three approaches
In api-portal.io, tools are served primarily through MCP, with automatic generation from OpenAPI specs. Existing Function-Calling or Tool-Use integrations keep running through adapter layers, while the underlying tool definition is already MCP-compliant. That keeps model switches and multi-application reuse on the table without rewriting tool schemas.
The MCP Server handles the connection between APIs and MCP-capable clients.