Back to blog
AIIndustry insights

APIs, MCPs, or Both? Choosing the Right AI Integration Stack

Trying to decide between APIs, MCPs, or both? This article breaks down how each option works, where they fall short, and how they can work together. It covers real integration challenges, the role of unified APIs, and what to consider when building AI-driven systems that are reliable and scalable.

Saurabh Rai

Saurabh Rai

12 min read
APIs, MCPs, or Both? Choosing the Right AI Integration Stack

Last week (mid-June 2025, for those reading in the future), I saw this post making the rounds on LinkedIn. With nearly 1,000 reactions and 350 comments, people had decidedly mixed reactions. Some preferred MCPs as the future of LLM communication, while others were still focused on APIs. After all, APIs have been around for a long time, and most applications support them already, making APIs easier for development and communication.

API or MCP How to Make the Right Choice for your LLM-Stack in text img

If you're in this position, confused about whether MCPs will replace APIs or how they fit together, then you're not alone. Many others are grappling with the same questions. In this post, I'll help clarify how these technologies actually complement each other rather than compete.

Let's talk about APIs first, since nearly every application has them and they're the common and straightforward way to develop and fetch data from apps and servers.

The API Foundation

APIs (Application Programming Interfaces) extend existing applications' internals for other developers to fetch and process data. Servers use APIs to communicate between multiple instances of front-ends and backends. The whole process has been around for two decades and has stood the test of time. So, why not use APIs to get started with LLM-based communications?

You can wrap API calls as a function or tool calls, and then extend any LLMs functionality that supports function/tool calling easily. This way, the LLM or AI Agent can easily make calls to get data or post-processed data back. You already have the APIs, and function calling exists to make sure LLMs can do this. The foundation is solid, but the interface layer is where things get complex.

The Reality: It's Not That Simple

Unfortunately, it's not that straightforward. For APIs to be wrapped in function calls, they need to be well documented, and there should be focused endpoints for each kind of operation. But tool calling for complex functions doesn't work well in all scenarios and has a chance of calling the wrong function, which can hit the wrong API endpoint. Instead of a clean 200 response, you'll get a frustrating 401 error.

Challenge 1: Multiple Data Models

The data models in these APIs differ for every application. If you're working with 4-5 different apps or servers, then you need to make sure that the functions and documentation you're writing as prompts for the LLM contain information about all the possible data models available. This creates extra overhead where you have to catalog all the endpoints you're sharing with your LLM and then pass them as prompts when required. Then you just hope that it makes the right tool call.

Challenge 2: The Moving Target Problem

APIs keep changing, which makes them more difficult to work with for LLMs. Here's the tedious process: You have to first change the data models and URLs, then update and test your functions. Then update your prompts and documentation and test again with LLMs. Finally, update when required and check if everything is working correctly. Once again, you're left hoping that with multiple applications, the LLM or AI Agent makes the right call and everything works.

The Cost of Hope in Production

Hope is a good thing. However, in high-stakes production environments, especially when using costly AI models that can easily burn $500 worth of credits in an hour (Claude-4 Opus, o3-Pro, etc.), you need to make sure that LLMs perform their best on the first try. Mistakes directly increase your bills through both server costs and LLM token costs. Nobody wants that.

Enter MCPs: The AI Agent Interface Layer

That's why MCPs were introduced as a protocol for LLM and agentic communication and operations. Rather than replacing APIs, MCPs create a standardized interface layer that sits between AI agents and the underlying API infrastructure. MCPs are new and still in the infant stage. The security spec was recently introduced two weeks ago (June 18th, 2025). While they're gaining widespread adoption by companies like Cloudflare, Neon, Vercel, and AWS, MCPs need to undergo many changes and be more robust when it comes to permission-based and secure access. But we're getting there week by week.

The key insight is that MCPs rely on well-designed APIs underneath. When companies implement MCPs successfully, they're typically building on top of robust API foundations that provide the consistent schemas and reliable data access that make MCPs work effectively.

The Unified API Advantage: Best of Both Worlds

However, there's an optimal way to bridge APIs and MCPs: through unified APIs. These platforms solve the complexity problem while providing the foundation that makes MCPs more effective.

For example, instead of building separate MCP connectors for Zendesk, Salesforce, and Intercom (each with different data models for tickets, customers, and conversations), a unified API provides a single /tickets endpoint. Your AI agent learns one schema but can pull data from all three platforms. When you later add an MCP layer, it operates on this consistent foundation rather than juggling three different APIs.

Another example would be, rather than teaching your LLM about Shopify's product variants, WooCommerce's attributes, and Magento's configurable products, a unified API normalizes all three into a standard product model. Your AI agent can analyze inventory, generate reports, and make recommendations using one consistent interface, whether the data comes from Shopify or Magento.

When you're using multiple applications to fetch data from, process that data, and send it back, the LLM needs to handle processing and data modification as prompted. Unified APIs provide a single API endpoint that can extract data from different applications by just changing one or two parameters while keeping the rest of the syntax and data models consistent. You can connect any application to the API and only need to teach the LLM about a few endpoints. The rest can be controlled easily. These can be passed as simple functions for tool-calling, and LLMs can easily operate with them, creating more deterministic and predictable outcomes. This means fewer 400s and more 200s for all of us.

This unified approach creates the perfect foundation for MCPs when you're ready to implement them.

Choosing the Right Approach for Your Project

When You Already Have Well-Designed APIs

In cases where you have well-documented APIs and have been using them for a long time, your development team is already familiar with them and can handle prompting, changes in API structures, or shifting to a unified APIs without much trouble. Here, you can either continue with direct API integration or add an MCP layer on top for better AI agent interaction.

When You're Starting from Scratch

However, if you're at ground zero and have to build API structures, models, endpoints, etc., from scratch, then the best choice is to hop onto the MCP train and start extending your app features via the Model Context Protocol. (It's just JSON-RPC 2.0 under the hood, with stateful communication features.)

Why MCPs Build on Strong API Foundations

Here's the thing about implementing MCPs effectively: they work best when built on top of well-designed APIs. When you build APIs from scratch without considering AI integration, you're essentially creating custom integrations that your LLM will struggle to learn. You'll write functions, document endpoints, handle authentication, manage rate limits, and pray that your LLM makes the right calls without breaking anything.

MCPs create a better interface layer, but they still need that solid API foundation underneath. Instead of building custom APIs and then figuring out how to make your LLM work with them, you're building with AI agent integration as the primary interface goal from day one.

Major companies are proving that this model works by building MCPs on top of their existing robust API infrastructure:

Block has integrated MCPs directly into their payment systems, letting AI agents handle invoice generation and customer management through natural language - but this relies on their existing, battle-tested payment APIs underneath. Replit uses MCPs to let AI agents understand your entire codebase context, not just the single file you're working on - built on top of their comprehensive developer platform APIs. AWS has a full suite of MCPs that help LLMs interact with their offerings to build products faster - all leveraging the extensive AWS API ecosystem that powers their cloud platform.

These aren't POCs; they're deployed to production systems handling real business operations and are available for use by the public.

Over 1,000 open-source MCP connectors emerged by February 2025, and companies like Cloudflare, Neon, Vercel, and AWS are building MCP support directly into their platforms. When you implement MCPs today, you're building an interface layer on infrastructure that the entire AI ecosystem is standardizing around.

Plus, there's a practical development advantage. Building an MCP interface on top of existing APIs is significantly faster than building a full API infrastructure from scratch and then trying to make it AI-friendly. You get authentication, error handling, and protocol management built in. Your development team can focus on creating effective AI agent experiences instead of wrestling with REST endpoint design, OpenAPI documentation, and rate limiting implementations.

While APIs have been tested and have mature security measures, MCPs are relatively new, and so are the security features and implementation ideas. There are some common challenges with MCPs, and some of them are passed onto MCPs via the LLMs in the form of prompt injections.

MCP Security and Implementation Challenges

Now, before you get too excited about MCPs, let's talk about security. The biggest concern is what security researchers call the "keys to the kingdom" problem. When you connect an MCP server to multiple services like Gmail, Google Drive, Slack, GitHub, and your database, you're creating a single point of failure. If someone compromises your MCP server, they're not just getting access to one service. They're getting OAuth tokens for everything you've connected. That's your entire digital life or your company's critical systems in one vulnerable basket. There was a similar problem with GitHub’s MCP, where it provided full access to private repos just by authenticating and leaking access tokens.

Then there's the prompt injection. Remember how we talked about LLMs making wrong API calls? Well, with MCPs, the stakes are higher. Attackers can hide malicious instructions in documents, emails, or even Slack messages. When your AI agent processes that content through MCP, it might follow those hidden instructions instead of your actual requests. Imagine an AI agent that's supposed to summarize your emails but instead forwards them all to an external attacker because someone embedded invisible instructions in a message.

The security spec that was introduced in June 2025 helps, but it's still evolving. Many MCP implementations don't have proper authentication yet. Some developers are connecting MCPs to production systems without understanding that they're essentially giving AI agents root access to their infrastructure. The protocol recommends having humans in the loop for sensitive operations, and that’s on the development & security team to implement and maintain.

There's also the supply chain risk. With over 1,000 open-source MCP connectors available, and a malicious MCP server can disguise itself as a legitimate tool, get approved by unsuspecting users, and then quietly change its behavior after installation.

For production deployments, you need to treat MCP servers like you would any other critical infrastructure component. That means proper monitoring, audit logs, least privilege access, and regular security reviews. Many teams rush to implement MCPs without considering these requirements, which sets up a security disaster waiting to happen.

The irony is that MCPs solve the complexity problem of API integrations while introducing entirely new categories of security complexity. It's not that MCPs are inherently insecure, but they require a different mindset about AI security that many development teams haven't adopted yet.

Conclusion

So, where does this leave us in understanding how MCPs and APIs work together? The truth is, they're not competing technologies, and that's exactly why that LinkedIn post sparked 350 comments in the first place. MCPs don't replace APIs – they create a better interface layer for AI agents to interact with the solid API foundations you've already built.

If you're working with existing, well-documented APIs and have a development team that's comfortable with function calling and prompt engineering, you can enhance your setup by adding MCP interfaces on top. Unified APIs can bridge the complexity gap, and your LLMs can work more effectively with what you've already built.

If you're starting from scratch or building AI-first applications, consider building unified APIs first to establish a solid foundation, then implementing MCPs to create an optimal AI agent interface. The ecosystem momentum is real, the standardization benefits are significant, and you'll be building on the foundation that the entire AI industry is converging toward.

The key is being honest about your current situation and future goals. Don't choose MCPs thinking they'll replace your API infrastructure, and don't avoid them thinking they're just another protocol to manage. Consider your team's expertise, your security requirements, your existing infrastructure, and most importantly, how you can create the best interface for AI agents to interact with your systems.

Whether you choose to enhance your existing APIs with unified APIs or add MCP interfaces on top, make sure you understand that you're building layers that work together. The AI ecosystem will continue evolving rapidly, but the fundamental principle remains the same: choose the approach that lets your team build amazing AI applications on solid API foundations without getting lost in the integration complexity. That's what matters.

Ready to get started?

Scale your integration strategy and deliver the integrations your customers need in record time.

Ready to get started?
Trusted by
Nmbrs
Benefex
Principal Group
Invoice2go by BILL
Trengo
MessageMedia
Lever
Ponto | Isabel Group
Apideck Blog

Insights, guides, and updates from Apideck

Discover company news, API insights, and expert blog posts. Explore practical integration guides and tech articles to make the most of Apideck's platform.