Since early 2025, a new question has been landing on backend and API teams: how do you expose existing APIs so AI agents like Claude, Cursor, or ChatGPT can actually use them? One of the more important answers is MCP, the Model Context Protocol. Anthropic published it as an open standard in November 2024, and within a few months it had quietly become the de-facto way to connect tools and data sources to AI agents.
In 2026, anyone tasked with wiring AI agents into an existing API landscape will run into MCP almost immediately. For most enterprise teams, the question is no longer whether MCP matters. It’s which APIs to expose, how deep the integration should go, and which governance rules need to be in place before anything ships.
MCP, the Model Context Protocol, is an open standard from Anthropic, available since November 2024. It defines how AI agents discover and use tools, resources, and prompts from external servers. Claude, ChatGPT, Cursor, and others support the protocol. For enterprise APIs, MCP introduces a second spec layer: a deliberate way to expose selected API functions to AI agents without touching the underlying REST endpoints.
What MCP is and why it exists
The Model Context Protocol is a communication standard between AI agents and external tools, data sources, or APIs. Anthropic published MCP as an open-source spec in November 2024 to fix a problem that had so far been solved mostly in proprietary ways. Every AI agent vendor shipped its own tool-integration interface, which made portability painful.
Before MCP, two approaches dominated the field. Function Calling, introduced by OpenAI in 2023, made tool invocations possible inside a single API conversation. It was pragmatic and good enough for many use cases, but it stayed locked to one vendor. ChatGPT Plugins, which OpenAI offered briefly, followed a similar idea and were just as proprietary.
MCP flips that logic. Instead of building a separate tool interface for every agent vendor, MCP defines an open standard that both agents and tool providers can implement against. Build a tool once for MCP and it works with Claude, ChatGPT, Cursor, and any other agent that speaks the protocol.
Adoption moved unusually fast. Within months of release, most major vendors had either integrated MCP or announced plans to. Microsoft, Google, Cursor, and OpenAI were among the early adopters. MCP had already become a de-facto standard before formal standards bodies even showed up.
One reason for the speed is how familiar MCP feels. There are no radically new modeling patterns. Tools are essentially functions with input and output schemas, resources are named data points, and the JSON-RPC-based transport has been around for years. If you know OpenAPI, you’ll usually find your bearings in MCP quickly. Most modeling principles carry over. What changes is who consumes the spec and how the call gets made.
For a clearer line between OpenAPI and adjacent terms like Swagger, see OpenAPI vs Swagger. That distinction also helps separate MCP cleanly from existing API specs.
The MCP architecture in four concepts
MCP is best understood through four core concepts. Together they describe how agents and servers interact and what kinds of information an MCP server can expose.
| Concept | What it actually means |
|---|---|
| Server | A component that exposes tools, resources, and prompts. An MCP server can run as a standalone process, sit as a sidecar next to an existing backend, or live inside a platform feature. |
| Tools | Callable functions with input and output schemas. A tool is the MCP equivalent of an API endpoint and describes an action the agent can perform. |
| Resources | Readable data sources that an agent can fetch. Resources are passive, unlike tools. A resource might be a document, a dataset, or a cached API response. |
| Prompts | Pre-built conversation templates that a server offers to an agent. Prompts structure more complex workflows where an agent needs to coordinate several tools. |
Communication between agent and server runs over JSON-RPC. Locally, that usually means a stdio transport. For remote servers, HTTP/SSE or similar transports take over. Authentication wasn’t fully specified in the early MCP versions, but the current OAuth-2.0-based profile has cleared most of that up.
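For a local server, the stdio transport is conceptually simple: newline-delimited JSON-RPC messages over stdin/stdout. A minimal sketch of the framing in Python (framing only; real clients also match responses to request ids and handle partial reads):

```python
import json

def encode(msg):
    """Frame one JSON-RPC message for the stdio transport: one JSON object per line."""
    return (json.dumps(msg) + "\n").encode("utf-8")

def decode(line):
    """Parse one framed line back into a message."""
    return json.loads(line.decode("utf-8"))

# A tools/list request as it would cross the wire to a local server.
wire = encode({"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}})
roundtrip = decode(wire)
```

Remote transports swap the pipe for HTTP, but the message shapes stay the same, which is why MCP servers can usually support both.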
```json
{
  "name": "get_customer_orders",
  "description": "Returns a customer's most recent orders",
  "inputSchema": {
    "type": "object",
    "properties": {
      "customerId": { "type": "string", "format": "uuid" },
      "limit": { "type": "integer", "default": 10 }
    },
    "required": ["customerId"]
  }
}
```

The tool schema deliberately reads like a slice of an OpenAPI spec. Conceptually, that’s exactly what it is. It describes inputs, expected parameters, and a callable function. The real difference isn’t the modeling. It’s who reads the description and how the call gets made. With OpenAPI, that’s usually developers or HTTP clients. With MCP, it’s AI agents.
Connection setup follows a clear sequence too. In the initialize step, client and server exchange capabilities. That’s the moment the agent learns whether the server offers tools, resources, or prompts, and which functions are available. From there, a session stays open, and the client can invoke tools, fetch resources, or load prompts. Notifications let the server tell the client when tool lists change or new resources show up.
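The sequence above can be sketched as plain JSON-RPC messages. A minimal illustration in Python (method names follow the published MCP spec; client name, version, and argument values are illustrative assumptions, and the session plumbing is omitted):

```python
import json

def jsonrpc(method, params, msg_id=None):
    """Build a JSON-RPC 2.0 message; omit "id" to produce a notification."""
    msg = {"jsonrpc": "2.0", "method": method, "params": params}
    if msg_id is not None:
        msg["id"] = msg_id
    return msg

# 1. The client opens the session and announces its capabilities.
initialize = jsonrpc("initialize", {
    "protocolVersion": "2025-03-26",   # spec revision the client speaks
    "capabilities": {},
    "clientInfo": {"name": "example-agent", "version": "0.1"},  # assumed names
}, msg_id=1)

# 2. After the server's initialize result arrives, the client confirms.
initialized = jsonrpc("notifications/initialized", {})

# 3. The agent discovers the available tools, then invokes one.
list_tools = jsonrpc("tools/list", {}, msg_id=2)
call_tool = jsonrpc("tools/call", {
    "name": "get_customer_orders",
    "arguments": {"customerId": "c0ffee00-0000-4000-8000-000000000000",  # example id
                  "limit": 5},
}, msg_id=3)

print(json.dumps(call_tool, indent=2))
```

From the agent's point of view, everything after `initialize` is just request/response pairs over the open session, plus the server-side notifications mentioned above.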
MCP and OpenAPI side by side
MCP and OpenAPI don’t solve the same problem, but they overlap at one important point. Both describe functions a consumer can invoke. OpenAPI is aimed mostly at human developers, API consumers, and HTTP clients. MCP describes similar capabilities, but for AI agents.
In practice the two complement each other. An API that’s already cleanly described in OpenAPI can usually be exposed over MCP as well, without major rework. Endpoints, schemas, and auth mechanisms carry over. What’s mostly new is the extra spec layer that tells an AI agent how to discover and use those functions sensibly.
If you’re planning an OpenAPI-to-MCP migration, the technical details live in the upcoming article on that topic. The core idea is simple. A well-maintained OpenAPI spec is the best starting point for an MCP exposure, because schemas, authentication, and operations are already documented in machine-readable form.
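As a rough illustration of why the mapping is largely mechanical, here is a hedged sketch that derives an MCP tool definition from a single OpenAPI operation (a simplified subset covering parameters only; real converters also handle request bodies, responses, and auth):

```python
def openapi_op_to_mcp_tool(path, method, op):
    """Derive an MCP tool definition from one OpenAPI operation.
    Sketch only: path/query parameters are folded into one input schema."""
    properties, required = {}, []
    for p in op.get("parameters", []):
        properties[p["name"]] = p["schema"]
        if p.get("required"):
            required.append(p["name"])
    return {
        "name": op["operationId"],
        "description": op.get("summary", f"{method.upper()} {path}"),
        "inputSchema": {"type": "object",
                        "properties": properties,
                        "required": required},
    }

# The kind of OpenAPI operation that could sit behind the tool schema shown earlier:
op = {
    "operationId": "get_customer_orders",
    "summary": "Returns a customer's most recent orders",
    "parameters": [
        {"name": "customerId", "in": "path", "required": True,
         "schema": {"type": "string", "format": "uuid"}},
        {"name": "limit", "in": "query",
         "schema": {"type": "integer", "default": 10}},
    ],
}
tool = openapi_op_to_mcp_tool("/customers/{customerId}/orders", "get", op)
```

The schemas survive the translation untouched; what a converter mostly adds is the agent-facing description discussed next.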
The most important difference shows up where OpenAPI deliberately stops. MCP tools often need extra information that only matters to an agent. When should a tool be used? What’s a sensible call order? What data should the agent already have before invoking it? This kind of meta-information is what makes MCP tools usable for AI agents, and it goes beyond what OpenAPI as a format typically captures.
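What that extra layer can look like in practice: the MCP spec lets a tool carry agent-facing guidance in its description and optional behavioral hints in annotations. A sketch (the annotation field names follow the MCP tool annotations; the wording and the `search_orders` sibling tool are illustrative assumptions):

```python
tool = {
    "name": "get_customer_orders",
    # Written for the agent: what it returns, when to use it, what to have first.
    "description": (
        "Returns a customer's most recent orders for support inquiries. "
        "Use after the customer has been identified (requires a customerId); "
        "prefer this over search_orders when the customer is already known."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {"customerId": {"type": "string", "format": "uuid"},
                       "limit": {"type": "integer", "default": 10}},
        "required": ["customerId"],
    },
    # Optional behavioral hints defined by the MCP spec.
    "annotations": {"readOnlyHint": True, "idempotentHint": True},
}
```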
Who supports MCP today
MCP adoption accelerated sharply in the first months of 2025. In practice, four groups of adopters are visible today.
AI agents as consumers. Claude has supported MCP natively since December 2024. ChatGPT followed in spring 2025 with its own implementation. Cursor, Continue, and other AI coding assistants now use MCP as their default way to connect to external tools and data sources.
IDEs and developer tools. JetBrains, VS Code, and specialized editors integrate MCP as the interface between agent and development environment. That lets agents reach into code repositories, local files, or build systems. MCP, in effect, becomes the bridge between the coding workflow and the AI agent.
Platform vendors. Microsoft is folding MCP server components into Azure services. Google Cloud offers MCP-capable endpoints for select services. Anthropic also maintains a reference implementation that other vendors can align with.
Open-source community. In parallel, the number of available MCP servers for standard tools like GitHub, Slack, Notion, or local databases keeps growing. For enterprise teams, that matters: most integration work doesn’t have to start from scratch. It can build on existing servers and patterns.
Cloud-native vendors are catching up too. AWS shipped initial MCP endpoints for Bedrock services in spring 2025. Snowflake offers an MCP server for database access. API platforms like api-portal.io now generate MCP servers as a built-in product feature. With that, MCP has gone from a side project of individual teams to a standard piece of modern API platforms.
What MCP actually means for enterprise APIs
For enterprise API teams, MCP mostly comes down to one thing. It adds a second spec layer on top of existing APIs. Not every API function should automatically become available to agents. What matters is which functions get deliberately exposed, under what conditions, and how permissions, audit logging, and quotas get handled.
In enterprise contexts, three use cases come up especially often.
Internal developer productivity. Backend teams expose selected read endpoints over MCP so developers can pull API data straight from their AI coding assistant. Instead of jumping between editor, Postman, and curl, they ask the agent inside the workflow they already use.
Customer support automation. Support APIs get exposed over MCP for agent integrations. Those agents can classify tickets, prep routing decisions, or summarize case data. The decision stays with the human; the structured data work moves to the agent.
Internal knowledge workflows. Document APIs, wiki APIs, and database queries can be wired up to internal agents over MCP. Employees get answers to knowledge questions without having to search multiple systems themselves. The important detail here: permission logic stays in the underlying API. It doesn’t move into the MCP layer.
Practical tip. For a first MCP integration, pick a clearly bounded read-only use case. The team gets to see how tool invocations, permissions, and audit logging behave in practice without opening up writes on day one. Write tools come later, once you can monitor agent calls in your own setup transparently and reliably.
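What a deliberately read-only exposure can look like, sketched as a plain dispatch table rather than a real MCP SDK (tool names, the stand-in fetch helper, and the blocked write tool are all assumptions for illustration):

```python
def fetch_orders(customer_id, limit):
    """Stand-in for the call to the existing REST backend (assumed helper)."""
    return [{"orderId": "A-1", "customerId": customer_id}][:limit]

# Only read operations are registered; write tools are simply never exposed.
READ_ONLY_TOOLS = {
    "get_customer_orders":
        lambda args: fetch_orders(args["customerId"], args.get("limit", 10)),
}

def handle_tool_call(name, arguments):
    """Dispatch a tools/call request; unknown (including write) tools are rejected."""
    tool = READ_ONLY_TOOLS.get(name)
    if tool is None:
        return {"isError": True,
                "content": [{"type": "text", "text": f"unknown tool: {name}"}]}
    return {"content": [{"type": "text", "text": str(tool(arguments))}]}

ok = handle_tool_call("get_customer_orders", {"customerId": "c-42"})
blocked = handle_tool_call("delete_customer", {"customerId": "c-42"})
```

Keeping write tools out of the registry entirely, rather than guarding them with flags, makes the read-only guarantee trivial to audit.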
Observation from the field. In a pilot project, we put an MCP server in front of an existing internal reporting API. The initial effort for server setup and tool definitions came in at around two person-days. The payoff showed up the moment analysts started pulling ad-hoc reports straight from their coding assistant. What used to take several tool switches now happened inside a single editor window.
Where to be careful
MCP is still young. That’s exactly why a few things deserve a hard look before any production rollout.
The first is security. An MCP server exposes tools that agents invoke on their own. If a tool isn’t explicitly read-only, the agent effectively has write access to the underlying system. That’s why every production MCP setup needs a clean audit model, gap-free logging of every tool invocation, and clearly defined quotas.
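One way to make those requirements concrete: wrap every tool invocation in a layer that logs the call and enforces a per-agent quota before anything reaches the backend. A sketch under stated assumptions (in-memory stores stand in for a real audit log and rate limiter; the quota value is an example):

```python
import time
from collections import defaultdict

AUDIT_LOG = []               # stand-in for an append-only audit store
QUOTA = 100                  # example: max tool calls per agent per hour
_calls = defaultdict(list)   # agent_id -> timestamps of recent calls

def invoke_with_governance(agent_id, tool_name, arguments, tool_fn):
    """Run one tool invocation with quota check and gap-free audit logging."""
    now = time.time()
    # Enforce the quota over a sliding one-hour window.
    _calls[agent_id] = [t for t in _calls[agent_id] if now - t < 3600]
    if len(_calls[agent_id]) >= QUOTA:
        AUDIT_LOG.append({"agent": agent_id, "tool": tool_name,
                          "outcome": "quota_exceeded", "ts": now})
        raise RuntimeError("quota exceeded")
    _calls[agent_id].append(now)
    # Log before and after execution so failures still leave a trace.
    AUDIT_LOG.append({"agent": agent_id, "tool": tool_name,
                      "args": arguments, "outcome": "invoked", "ts": now})
    result = tool_fn(arguments)
    AUDIT_LOG.append({"agent": agent_id, "tool": tool_name,
                      "outcome": "ok", "ts": time.time()})
    return result

result = invoke_with_governance("agent-1", "get_customer_orders",
                                {"customerId": "c-42"}, lambda a: {"orders": []})
```

The point of the sketch is the ordering: quota check first, audit entry before execution, a second entry after, so the log never has gaps even when a call fails midway.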
The second is authentication. Early MCP versions left a lot of room for interpretation around authn and authz. Later spec revisions closed those gaps significantly. Anyone putting an MCP server in front of a production API should support the current OAuth-2.0 profile and avoid building on older implementations.
Warning. MCP servers without an authentication layer are a direct risk to the backend. Several incidents that went public in 2025 followed this exact pattern: a reachable MCP server exposed functions without sufficient access control. That’s why the OAuth-2.0 profile in the current spec isn’t an optional add-on. It’s a baseline requirement for any production exposure.
The third is tool design. A tool offered to an agent has to be described unambiguously. Vague descriptions, too many optional parameters, or purely technical names raise the odds that the agent picks the wrong tool or calls it with the wrong arguments. The care you’d already put into a good API spec matters even more in MCP.
A simple practical rule helps especially at the start. Write the description of an MCP tool from the agent’s perspective, not the backend’s. "Returns a customer’s most recent orders for support inquiries" is far more useful to an agent than "GET /customers/{id}/orders". It sounds obvious, but it’s one of the most common gotchas in early MCP setups, because backend teams are used to describing endpoints in mostly technical terms.
How api-portal.io supports MCP
In api-portal.io, you can generate MCP servers directly from existing OpenAPI specs. Tools, resources, and auth profiles get derived from the spec and can be tuned specifically for the agent use case. The existing API description stays the starting point, while the MCP layer adds the extra information AI agents need.
The platform handles audit logging, quota management, and the permission mapping between MCP calls and the underlying API. The MCP server makes existing APIs usable in MCP scenarios without forcing teams to rebuild or maintain a parallel REST surface.
In this setup, MCP isn’t a replacement for OpenAPI. It’s a complementary layer for AI agents. Existing APIs stay intact and get opened up selectively for agent workflows.