
Complete Guide to Model Context Protocol (MCP): Transforming AI Agent Development

Practical Implementation of the New Standard for LLM and Tool Integration

2025-04-07
47 min
MCP
LLM
AI Agents
Claude
AI Technology
Agent Development
Vibe Coding
Ryosuke Yoshizaki

CEO, Wadan Inc. / Founder of KIKAGAKU Inc.


What is MCP — A New Protocol Changing AI Tool Integration

In recent months, a technology has been rapidly gaining attention in the AI developer community. The Model Context Protocol (MCP), proposed by Anthropic, is being described as the "USB-C for LLMs". It provides a standardized way to connect large language models (LLMs) with external tools and data sources, and it is changing the paradigm of AI agent development.

Though still in its early stages as a technical standard, its potential impact is immeasurable. This article comprehensively covers everything from MCP's basic concepts to its technical mechanisms, implementation methods, and future prospects. I hope it will help engineers involved in AI agent development, as well as those in positions to consider technical strategy, understand the possibilities and challenges this new protocol brings.

My interest in MCP stems from a perspective of bridging technology and business. I sense that this technology has the potential to go beyond mere development efficiency and more effectively realize the integration of AI systems with business data. It's an important development that should be evaluated from both management and engineering perspectives, particularly as a mechanism for safely and flexibly connecting enterprise data with AI.

Model Context Protocol (MCP) is an open protocol for connecting large language models (LLMs) with external data sources and tools in a standardized way. Released by Anthropic in November 2024, this technology is positioned as "a new open standard for standardizing how context is provided to AI applications" [1].

The Connection Problem — Before and After MCP

To understand the value of MCP, we first need to look at the situation before its emergence. Why was such a standard necessary?


Before MCP, each LLM application needed its own integration code for every external tool it connected to. This is the classic challenge known as the "M×N problem": connecting M LLM apps with N tools requires M×N separate implementations (for example, 5 apps and 20 tools mean 100 bespoke integrations).

Developers had to write unique integration code for each combination of LLM and tool, and the work multiplied whenever a new tool or LLM appeared. This not only significantly decreased development efficiency but also increased the burden of maintenance and updates.

MCP proposed a fundamentally new approach to solve this problem. What kind of solution does it offer?


MCP adopts a client-server architecture: the LLM app side implements an "MCP client" and the tool side implements an "MCP server", ensuring interoperability. This reduces the necessary implementations to just M+N (25 in the example above, instead of 100), dramatically improving development efficiency.

Even when new LLMs or tools emerge, they can connect with the entire existing ecosystem simply by supporting MCP. This flexibility and scalability are the core values of MCP. I strongly sense its potential as a bridge connecting technology and business in this paradigm shift.

Background of MCP's Emergence

MCP was born from the following challenges facing the AI industry:

  1. Fragmentation of tool integration: Each LLM platform had its own integration method, increasing the burden on developers
  2. Complexity of data integration: Lack of a standard method for safely providing enterprise data to LLMs
  3. Need for expanding agent capabilities: Lack of flexible means to add new functionalities to existing AI assistants

To address these challenges, Anthropic released MCP as an open technology. Simultaneously, they released the MCP specification and SDKs (supporting various languages), local MCP server support for the Claude desktop app, and a collection of open-source MCP server implementations, laying the groundwork for rapid adoption [1].

This strategy was highly effective. By not only proposing a technical standard but also simultaneously providing actual implementations and tools, they significantly lowered the adoption barrier for developers. This is an important lesson in "building bridges for technology adoption."

MCP Adoption by Major Companies and Community Development

Let's look at the history of MCP from its emergence to the present. We can see how rapidly it has evolved in a short period.

Major MCP Milestones

2024
November

Released by Anthropic

MCP specification, SDK, and open-source server implementations released simultaneously. Foundation laid as a technical standard

2025
February

Microsoft begins support

Integrated into VS Code/Copilot, accelerating adoption as a developer tool. Shows strengths in code understanding context

2025
Late March

OpenAI announces support

MCP support in Agents SDK establishes it as a de facto industry standard. Entry of major competitor enhances reliability

2025
April onwards

Industry standardization acceleration period

Adoption spreading among major IT companies and startups, rapidly expanding practical ecosystem

Being released as open source, MCP has seen explosive growth in the developer community. From late 2024 through 2025, numerous MCP server implementations have been published on GitHub, reportedly numbering in the "thousands" [2].

For example, the community has implemented and shared MCP servers for a wide range of tools and services, ranging from Google Drive, Slack, GitHub, and databases (Postgres/SQLite) to browser automation (Puppeteer/Playwright) and various cloud APIs. The comprehensive official documentation and starter code available immediately after release also lowered entry barriers for developers, supporting this rapid expansion.

These developments indicate that MCP is beginning to function not just as one company's technology, but as common infrastructure for the AI industry. It is truly serving as the "USB-C for LLMs," enhancing interoperability among diverse AI systems.

Observing this rapid adoption, I feel that "timing" and "addressing existing pain points" are critical for technical standard acceptance. MCP emerged as a clear solution to problems developers faced daily, and because it combined ease of implementation with immediate effectiveness, it gained such widespread support in a short time.

Why MCP is Gaining Attention — Comparison with APIs and Strategic Value

Behind MCP's rapid adoption is clear superiority compared to traditional API integration methods. Let's delve into its strategic value.

Efficiency Improvement Through Unified Standards

Compared to conventional methods of connecting AI with external tools, MCP's greatest advantage is integration standardization. This brings important value for several reasons:

  1. Development resource efficiency: Eliminates the need for individual implementations for each connection pattern, significantly reducing development costs
  2. Plug & play simplicity: New functionality can be added simply by "attaching" existing MCP servers
  3. Improved maintainability: Standardized interfaces make the entire system easier to maintain

From my experience, ease of initial adoption and immediate value realization are key to successful technology integration. MCP was strongly embraced by developers because it came with abundant ready-made servers (Google Drive, Slack, etc.) from the start, offering the convenience of simply plugging in an MCP server whenever they wanted to "add ◯◯ functionality to my AI app."

Flexible Tool Addition to LLM Agents

One interesting application of MCP is the ability to give new tools to existing LLM agents that users cannot control. Harrison Chase, co-founder of LangChain, notes that "when you want to increase the tools of an agent you can't control yourself (e.g., Claude Desktop or Cursor), you need some kind of protocol, and MCP provides exactly that" [3].

This point is particularly important. Traditionally, adding external functionality to closed AI assistants was difficult, but with MCP support, users can start an MCP server and pass tool definitions to give the agent new capabilities. This means even users without specialized knowledge can extend agent functionality, dramatically increasing AI adaptability.


Why is this approach groundbreaking?

Traditional AI assistants could only use functions implemented by their providers. If you wanted new functionality, you had to request implementation from the provider or create a separate dedicated application yourself. With MCP-enabled assistants, when users think "I want it to be able to use this tool too," they can extend functionality simply by adding the corresponding MCP server. This fundamentally changes the customizability and extensibility of AI assistants.

From a business perspective, this is a good example of value creation by opening up a platform. Rather than the product provider preparing all functions, allowing extension by the ecosystem enables providing users with a richer experience. Apple's App Store, WordPress plugins, and now MCP all share a pattern common to successful platforms.

Utilization with Fixed Plans and Value Maximization

Particularly in Japan, the combination of fixed-price LLM services + MCP for "increased freedom" is attracting attention. For example, with Claude's monthly fixed-price Pro plan, while it doesn't inherently have functions like browser operation or external database reference, connecting MCP-compatible external tools makes it possible to perform various operations without additional cost.

One article reports recreating a demo by setting up a browser-operation MCP server with the Claude desktop app (paid plan), noting that setting up the MCP server itself is "very simple once you understand it" and that the author tested and confirmed it working [4].

This ability to "extend LLM capabilities within a fixed price" has a lower psychological barrier compared to cases where API calls incur charges, promoting experiments to freely "run" agents. Being able to execute long-term processing or multi-step operations without worrying about additional fees makes MCP an attractive option for users aiming to maximize utilization within fixed costs.

It's an interesting phenomenon from a business model perspective as well. The combination of a subscription model and open extensibility gives users the experience of "extracting more value from the product they purchased." This creates a virtuous cycle of increasing user satisfaction while enhancing the appeal of fixed-price plans.

Security and Data Sovereignty Considerations

An important consideration for enterprises is that MCP lets you set up and manage your own server on the data-source side, making it easier to address security and sovereignty concerns when handling confidential data [5]. Traditionally, passing confidential data to an LLM meant uploading it to the cloud or including it directly in prompts, but with MCP, data owners can run MCP servers within their organization and let LLMs access only the necessary data through standardized methods.


This ability to balance "improved usability" with "security" is another reason MCP is valued, particularly for enterprise use.

Concrete Enterprise Implementation Cases

There are also increasing examples of enterprise implementations demonstrating MCP's value. From the beginning, several companies and projects announced early adoption of MCP, including:

  • Block (formerly Square): Standardized data access through MCP integration with internal systems, enabling consistent data utilization across multiple AI applications
  • Apollo: Providing GraphQL API as an MCP server, improving developer experience through seamless integration with LLMs
  • Zed (editor): Enhancing code base reference functionality via MCP, significantly improving programmer code understanding and productivity
  • Replit (cloud IDE): Revamping the connection between development environment and AI with MCP, strengthening the coding experience

These cases suggest that MCP is particularly useful in enterprise domains (connecting multiple data systems with AI) and development support domains (retrieving relevant information for code understanding).

Personally, I see the most potential in connecting enterprise legacy systems with LLMs. Many companies have accumulated business systems and databases over many years, and replacing them immediately isn't realistic. MCP functions as a "bridge between legacy and AI," enabling gradual AI adoption. This is also an important technology supporting "transition strategies" in business transformation.

MCP's Technical Mechanism — Architecture and Basic Concepts

To deepen our technical understanding of MCP, let's examine its internal structure in detail.

Basic Client-Server Structure

MCP is based on a client-server architecture. How is this architecture structured?


First, there's an MCP client within the application hosting the LLM (e.g., Claude Desktop). This client acts as an "interpreter" between the LLM and external MCP servers.

Next, there are various MCP servers operating as external processes. Each server provides access to specific tools or data sources (e.g., browsers, databases).

Communication between MCP clients and servers uses JSON-RPC 2.0, with message-driven bidirectional requests, responses, and notifications [6].

Multiple transport layers are provided for communication, selectable based on use case:

  • Stdio transport using standard input/output (stdin/stdout) for local environments
  • SSE transport using HTTP + Server-Sent Events for network communication

Through this setup, the LLM model communicates with various servers via the client to retrieve necessary information or perform operations.
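To make this concrete, here is a sketch of a single exchange on the wire. The method name tools/call comes from the MCP specification; the tool name and arguments are purely illustrative. The first message is the client's request, the second is the server's response carrying the result as content blocks:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "web_search",
    "arguments": { "query": "Model Context Protocol" }
  }
}

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [{ "type": "text", "text": "Top results: ..." }]
  }
}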

It's important to note that MCP is robustly designed. JSON-RPC is an established communication standard that enables bidirectional communication and simple state management. Having multiple transport options allows for a wide range of use cases from local to remote environments. This technical robustness is another reason for MCP's rapid adoption.

The Three Elements: Resources, Tools, and Prompts

MCP servers can provide three types of offerings to LLMs [7]. These form the core functionality of MCP.

1. What are Resources?

Resources primarily refer to read-only data, such as:

  • Files and documents
  • Database records
  • API response data
  • System logs
  • Media like images and audio

This data is information you want to provide to the LLM as context. For example, used in requests like "Please read and summarize this document."

Resources are uniquely identified by URIs (e.g., file:///path/to/file.txt or postgres://...) and are loaded into the LLM through client-side operations or user selection. Resources are essentially static and referential.
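As a sketch, a server's response to a resources/list request might look like the following (the URI, name, and mimeType values are illustrative):

{
  "resources": [
    {
      "uri": "file:///docs/quarterly-report.md",
      "name": "Quarterly Report",
      "mimeType": "text/markdown"
    }
  ]
}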

Why is the concept of resources important? It forms the foundation for supporting LLM context understanding. Being able to reference complex business data or specific knowledge bases allows the LLM to generate more accurate and relevant responses within limited token counts. This mechanism is essential, particularly for integration with enterprise documents and databases.

2. What are Tools?

Tools refer to operations that can be called by the LLM, equivalent to function execution.


Tools are implemented on MCP servers as code or external service operations and are requested by the LLM as "function calls". Examples include:

  • Executing web searches
  • Querying databases
  • Creating or editing files
  • Calling APIs
  • Performing calculations

Tools are model-controlled, meaning the LLM itself automatically decides when to call them as needed. Each tool has a defined name, description, and input parameter schema. The LLM uses the description as a clue to determine which tool to use and passes the necessary parameters in JSON format.

This is an extremely powerful concept. By giving LLMs the ability to act, they evolve from mere "text generators" to "assistants that actually accomplish tasks." Through tool abstraction, LLMs can operate various external systems, ultimately producing more concrete results for our instructions.

3. What are Prompts?

Prompts are prompt templates for LLMs. They define prompt formats or procedures on the server side for certain types of queries, allowing users or clients to apply them to LLMs.

Examples include:

  • Templates for "analyzing code and outputting improvement points"
  • Templates for "summarizing documents"
  • Templates for "analyzing data"

Prompts are user-controlled, intended for users to explicitly apply to LLMs by selecting from a GUI menu or executing specific commands. Unlike tools or resources, prompts don't provide functionality but serve as a mechanism for standardizing and reusing instruction content for LLMs.
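For illustration, a prompt entry returned from a prompts/list request might look like this (the prompt name and argument are illustrative):

{
  "prompts": [
    {
      "name": "summarize_document",
      "description": "Summarize a document in a consistent format",
      "arguments": [
        {
          "name": "document_uri",
          "description": "URI of the resource to summarize",
          "required": true
        }
      ]
    }
  ]
}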

This element is particularly important for business use. For analyses or processes that need to be repeated in the same format, having standardized prompts means users don't need to write detailed instructions each time and can obtain results of consistent quality. This facilitates the integration of LLMs into business processes and lowers the threshold for adoption.

Communication Protocol and Execution Flow

MCP communication is based on JSON-RPC 2.0 and designed to be extensible while maintaining version compatibility. The exchange between clients and servers follows this process:

  1. Initialization Handshake: Mutual notification of versions and supported features upon connection
  2. Feature List Retrieval: Client obtains lists of available features (resources/tools/prompts) from the server
  3. Feature Utilization: Specific features are used based on LLM judgment or user selection
  4. Result Processing: Client conveys server responses to the LLM and reflects results
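As a rough sketch of step 1, the initialization handshake is itself a JSON-RPC exchange in which each side declares its protocol version and capabilities. The field names follow the MCP specification; the version string and client name shown here are illustrative:

{
  "jsonrpc": "2.0",
  "id": 0,
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "clientInfo": { "name": "example-client", "version": "1.0.0" },
    "capabilities": {}
  }
}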

Notably, MCP maintains flexibility while seeking standardization. Depending on implementation, servers can provide a wide range of functionality from simple file sharing to complex multi-stage processing via AI. Additionally, typing via JSON schema makes it easier for LLMs to understand available functions.

Thanks to this well-designed communication flow, MCP can function as a "common language" between various systems without depending on specific LLMs or tools. It has a solid design as a foundation for ensuring system interoperability.

How to Implement an MCP Server — Introduction Procedure Using Playwright as an Example

Let's explain the actual process of implementing MCP, using Microsoft's Playwright (browser automation) MCP server as an example. Playwright is a framework for operating web browsers, and by setting it up as an MCP server, you can give LLMs web browsing capabilities.


1. Installing the MCP Server

First, install the Playwright MCP server, available via npm (Node.js package manager).

# Global installation of Playwright server
npm install -g @playwright/mcp@latest

This command installs the Playwright MCP server on your system. In actual development environments, project-specific installation is often recommended over global installation, but this method is suitable for a quick trial.

After installation, the server can be launched on demand (the configuration below uses npx to do exactly that), completing the first step.

2. Configuring the Host Application

Next, configure the LLM host (client) to launch and use this server. For example, in the Claude Desktop app, add the following to the configuration file (claude_desktop_config.json) [8]:

{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest", "--headless"]
    }
  }
}

What does this configuration mean?

  • "mcpServers" is the list of MCP servers that Claude Desktop will automatically launch
  • "playwright" is the name (identifier) given to this server
  • "command": "npx" is the command used to launch the server
  • "args" is the list of arguments passed to that command
  • The "--headless" option is a setting to run the browser in the background

This configures Claude Desktop to automatically launch an MCP server named "playwright" internally.

3. Restarting the App and Verification

After saving the configuration, restart the Claude Desktop app. Upon restart, the Playwright MCP server will automatically launch and become accessible from Claude.

To verify that the configuration is correct, try asking Claude the following:

Please list the MCP servers currently connected to this app

If correctly configured, Claude should tell you that an MCP server named "playwright" is connected. This verification step is important and allows you to identify any issues early.

4. Using the Functionality

With the setup complete, let's try using the Playwright MCP server's functionality. For example:

"Please open Anthropic's website and tell me the titles of the latest blog posts"

With such instructions, Claude will automatically use Playwright to launch a browser, access Anthropic's website, and retrieve the information. It will then generate a response based on the retrieved information.

When executed, Claude will first explain something like "I'll check Anthropic's website using a browser" before starting to retrieve information. After a few seconds, you should receive the titles of the latest blog posts and a brief explanation of their content.

This way, users can utilize LLM web browsing capabilities through natural language instructions without writing complex code.

Usage Notes

There are several important considerations when implementing and using MCP servers.

In particular, MCP servers that perform automatic browser operations, like Playwright, come with the following operational considerations:

  1. Resource requirements: Sufficient memory and CPU resources are needed to launch browsers
  2. Network access: Appropriate network permissions are required for automatic web access
  3. Ensuring reliability: Since automatic browser operations potentially carry risks, it's advisable to limit access to trusted sites

In actual development environments, it's important to establish appropriate limitations and monitoring considering these points.

Playwright is just one example, and numerous other MCP servers are available. By selecting appropriate MCP servers for your purpose, such as Google Drive integration, database access, or file system operations, you can greatly expand LLM capabilities.
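As a sketch of how several servers can coexist, the host configuration can simply register them side by side. The filesystem entry below uses @modelcontextprotocol/server-filesystem, one of the official reference servers; the directory path is illustrative:

{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest", "--headless"]
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/workspace"]
    }
  }
}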

So what becomes possible when combining these diverse MCP servers? Here are some potential scenarios:

  • Collecting information from the web and storing it in a database
  • Searching internal documents and generating responses based on their content
  • Creating graphs and charts and preparing presentations

MCP's flexibility enables powerful workflows combining these complex tasks. It truly is a reliable foundation technology for building AI solutions that bridge technology and business.

MCP's Future Prospects — Industry Standardization Possibilities and Challenges

MCP is called the "USB-C for LLMs", but how realistic is this future vision? Let's examine the possibilities for standardization, current challenges, and long-term outlook.

Signs of Progressing Standardization

The strongest indication that MCP might become the de facto standard is the support status among major AI providers. As mentioned earlier, Anthropic, OpenAI, and even Microsoft have successively announced MCP support, indicating it's being effectively recognized as an industry standard.

OpenAI's SDK documentation explicitly mentions MCP, officially stating that it will support "using a wide range of MCP servers as agent tools" [9]. This shows companies aligning with MCP rather than pursuing independent paths, reducing concerns about standard fragmentation and providing a significant tailwind for its future prospects.

Furthermore, MCP's roadmap includes further extensions such as enhanced remote server support [4], indicating it will continue to mature as a technical standard. These moves by major providers and continuous technical improvements enhance MCP's long-term viability.


Current Limitations and Challenges

On the other hand, MCP faces several challenges.

Incomplete Full-Stack Standardization

MCP is merely a protocol between agents and servers, and currently most MCP servers operate as proxies calling existing service APIs [2]. To truly be "like USB-C with the same connector on both ends," tool providers (SaaS and API providers) would need to implement MCP directly. However, at this point, there are virtually no examples of original services having native MCP endpoints.

For the time being, the current approach of volunteers and third-party companies implementing MCP servers on behalf of services is expected to continue, and true standardization will take more time.

Why is this issue important?

Consider today's USB-C. The convenience of connecting with a single cable exists because both device side (smartphones, etc.) and host side (PCs, etc.) support USB-C. However, with MCP, only the LLM side (host side) currently supports MCP, while many service sides (device sides) don't support it directly. Instead, MCP servers acting as "adapters" are still needed.

Abstraction Limitations of Tool-Specific Functions

It's also noted that MCP might only handle the lowest common denominator of functionality across tools as a trade-off for abstraction [2]. For example, trying to recreate all the functions of an advanced project management software through MCP tools would either require a vast number of commands or risk oversimplification that loses the original convenience.

This "abstraction barrier" is a challenge MCP needs to overcome as it expands its coverage. However, this problem is similar to how "standard libraries in programming languages leave advanced operations to individual libraries," so MCP could take the stance of providing only core common parts and leaving special functions to extensions as needed.

Balancing abstraction and specificity has long been an important challenge in technical design. Too much abstraction increases usability but reduces functionality, while too much specificity provides rich functionality but increases complexity. How MCP balances this will be key to its future development.

Operational Overhead

While MCP integration is flexible, the operational cost of "making everything an external server" cannot be ignored [2]. For example, building an agent that uses 10 different external tools requires managing and keeping 10 MCP server processes running.

This might be excessive for small-scale use cases; when only a few tools need to be connected, directly incorporating the integration into code may well be easier. MCP's architecture shows its value above all in scalable, large-scale systems, and it isn't expected to replace every integration method, especially for the simplest use cases.

Ensuring Reliability and Quality

While numerous open-source MCP server implementations have emerged, their quality and safety vary widely [2]. Since anyone can claim an implementation, some might only implement part of the functionality or have insufficient error handling.

Stable agent operation requires MCP server layers to function reliably, but currently, which implementation to choose is at one's own risk. For this issue, it's expected that useful implementations within the community will naturally be refined through factors like star ratings and reputation.

Long-Term Outlook and Expected Developments

In the long term, MCP is expected to develop in the following ways:

  1. Managed Service Transformation: Major cloud providers offering managed MCP-compatible servers to reduce operational burden
  2. Direct Support from SaaS Companies: Major SaaS companies beginning to provide official MCP server endpoints
  3. Multi-Agent Collaboration: Formation of an ecosystem where different AIs share the same MCP server groups and work cooperatively
  4. Establishment of Standards Organizations: Formation of official consortiums to manage and improve MCP specifications

Overall, MCP's future vision has both bright aspects and challenges. Currently, it's in a stage where its various advantages are being evaluated and rapidly gaining support, and in the short term, it's likely to establish itself as a de facto standard. However, challenges like involving service providers and ensuring comprehensive reliability remain for it to become a truly universal standard.

So should you implement MCP now? Consider the following points to make your decision:

  • Do you need to connect multiple LLMs with multiple tools?
  • Is there a possibility of expansion in the future?
  • Can you accept the overhead?
  • Can you pay the cost of transitioning to the standard now?

From the perspective of bridging technology and business, MCP is becoming an important option in corporate AI adoption strategies. It presents a clear solution to the challenge of integrating existing systems with AI. Particularly for companies considering gradual AI adoption, MCP is worth considering as a foundation technology combining flexibility and scalability.

Best Practices for Creating Your Own MCP Server — Developer's Guide

Finally, let's introduce best practices for developing your own MCP server. We'll focus particularly on implementation using TypeScript, showing development frameworks and code examples.

Overview of MCP Server Development

First, let's look at the flow of MCP server development. The basic steps are: set up the project and install the SDK, declare the server's capabilities, register handlers for the requests you want to support, and finally connect a transport so that clients can reach the server.

Basic Framework for TypeScript Development

For MCP server development, you can use the official SDK provided by Anthropic. The TypeScript version in particular was one of the earliest to be well-organized, with rich classes and utilities necessary for MCP server implementation [10].

First, set up the project and install the necessary packages:

# Create and initialize project directory
mkdir my-mcp-server
cd my-mcp-server
npm init -y
 
# Install necessary packages
npm install @modelcontextprotocol/sdk

This command installs the SDK needed for MCP server development into your project. Next, let's look at the simplest example of an MCP server:

import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { ListResourcesRequestSchema } from "@modelcontextprotocol/sdk/types.js";
 
// Configure server information (name, version, etc.) and enable Resources functionality
const server = new Server(
  { name: "example-server", version: "1.0.0" },
  { capabilities: { resources: {} } }
);
 
// Register handler for "resource list" requests
server.setRequestHandler(ListResourcesRequestSchema, async () => {
  return {
    resources: [{ uri: "example://resource", name: "Example Resource" }],
  };
});
 
// Wait for client connections with Stdio transport
await server.connect(new StdioServerTransport());

What is this code doing?

  1. First, it imports the necessary modules
  2. Next, it creates a server instance, specifying the name, version, and provided functionality (resources in this case)
  3. It sets up a handler to respond to resource list requests
  4. Finally, it starts the server using standard input/output transport

This alone provides a minimal MCP server. In practice, you would add handlers for various requests to provide more functionality.

When this code is executed, the server starts and waits for connections from MCP clients. When a client requests a resource list, the configured handler is called and returns the defined resource information.
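For a quick check without wiring the server into a full client, the official MCP Inspector can act as a test harness. This sketch assumes the code above has been compiled to index.js:

# Launch the MCP Inspector against the local server (assumes compiled output at index.js)
npx @modelcontextprotocol/inspector node index.js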

Example of a Practical Tool-Providing Server

Now let's look at a more practical example, implementing an MCP server that provides web search functionality. Note, however, that many MCP implementations for web search already exist, so this is a "reinvention of the wheel" for practice purposes.

Step 1: Basic Server Configuration

First, set up the basic server configuration:

import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
import fetch from "node-fetch";
 
// Server configuration
const server = new Server(
  { name: "web-search-server", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

This section specifies the server's name, version, and provided functionality (tools). By setting tools: {} in the capabilities object, it declares that this server provides tool functionality.

Step 2: Implementing a Handler to Provide Tool List

Next, implement a handler that responds when a client requests a tool list:

// Handler returning tool list
server.setRequestHandler(ListToolsRequestSchema, async () => {
  return {
    tools: [
      {
        name: "web_search",
        description:
          "Accepts a search query and returns relevant search results",
        inputSchema: {
          type: "object",
          properties: {
            query: {
              type: "string",
              description: "Query string to search for",
            },
            num_results: {
              type: "integer",
              description: "Number of results to retrieve (maximum 10)",
              default: 5,
            },
          },
          required: ["query"],
        },
      },
    ],
  };
});

What is this handler doing?

  • When an LLM or client requests a list of available tools, this handler is called
  • Here, it defines one tool named "web_search"
  • It specifies the tool's description and required parameters (search query and number of results)
  • It defines the type, description, and whether each parameter is required using JSON schema format

This allows the LLM to understand how to use this tool and call it with appropriate parameters at the right time. The LLM reads the tool's description and understands it should be used "when web search is needed."

Step 3: Implementing the Tool Execution Handler

Next, implement the processing for when the tool is actually called:

// Handler for tool calls
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;

  if (name === "web_search") {
    try {
      // Parameter validation (always validate input)
      const { query, num_results = 5 } = args ?? {};
      if (!query || typeof query !== "string") {
        throw new Error("Invalid search query");
      }

      const safeNumResults = Math.min(Math.max(1, Number(num_results)), 10);

      // Actual search API call (example: fictional API)
      const response = await fetch(
        `https://api.search.example.com/search?q=${encodeURIComponent(
          query
        )}&results=${safeNumResults}`,
        { headers: { Authorization: "Bearer YOUR_API_KEY" } }
      );

      if (!response.ok) {
        throw new Error(`Search API error: ${response.statusText}`);
      }

      const data = (await response.json()) as any; // fictional API response shape

      // Format results and return them as a text content block
      const results = {
        search_results: data.results.map((item) => ({
          title: item.title,
          snippet: item.snippet,
          url: item.url,
        })),
        total_found: data.total_found,
      };
      return {
        content: [{ type: "text", text: JSON.stringify(results, null, 2) }],
      };
    } catch (error) {
      // Error handling: report the failure as a tool-level error result
      console.error(`Search processing error: ${error.message}`);
      return {
        content: [
          { type: "text", text: `Search processing failed: ${error.message}` },
        ],
        isError: true,
      };
    }
  } else {
    // Unknown tool name
    return {
      content: [{ type: "text", text: `Unsupported tool name: ${name}` }],
      isError: true,
    };
  }
});

Let's examine this handler's processing in detail:

  1. First, it extracts the tool name and arguments from request.params
  2. It confirms the tool name is "web_search"
  3. It performs parameter validation, detecting invalid values
  4. It normalizes parameters to appropriate ranges (restricting result count to 1-10)
  5. It calls the actual search API (using a fictional API here)
  6. It processes the response from the API and extracts necessary information
  7. It formats the results and returns them as a text content block
  8. If an error occurs, it returns an error result flagged with isError: true

By carefully performing input validation and error handling, you can implement a robust MCP server.

Step 4: Starting the Server

Finally, add code to start the server:

// Server startup
await server.connect(new StdioServerTransport());
// Log to stderr: stdout is reserved for JSON-RPC messages over the stdio transport
console.error("Web search MCP server has started");

This completes the implementation of a simple MCP server providing web search.

When this code is executed, the terminal displays the message "Web search MCP server has started" and waits for connections from clients. When an MCP client like Claude Desktop connects to this server, web search functionality becomes available.
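To use it from Claude Desktop, register the server in the same configuration file shown earlier. This sketch assumes the TypeScript has been compiled to index.js; the path is illustrative:

{
  "mcpServers": {
    "web-search": {
      "command": "node",
      "args": ["/path/to/my-mcp-server/index.js"]
    }
  }
}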

Coding Assistance Using LLMs

MCP server implementation can sometimes benefit from the help of LLMs themselves. Anthropic notes that "the Claude 3.5 Sonnet model excels at automatically generating MCP server implementations, allowing organizations and individuals to quickly connect their own datasets" [1].

In fact, there are reports of cases where giving prompts like "implement an MCP server using this API" to ChatGPT or Claude can generate a significant portion automatically. Since MCP itself is designed with LLM compatibility in mind, it has the unique advantage that development itself can be easily collaborated on with AI assistants.

This approach is particularly effective when development resources are limited or when you want to quickly create a prototype. By telling an LLM "I want to implement an MCP server using this API" and providing API documentation or specifications, you can have basic implementation generated. Of course, the generated code must be reviewed and tested, but it's very helpful as a starting point for development.

Conclusion

Model Context Protocol (MCP) is a technology that's fundamentally changing the approach to AI agent development. This protocol, also called the "USB-C for LLMs," standardizes the integration of LLMs with external tools and data sources, improving development efficiency and expanding AI functionality.

Led by Anthropic and joined by major players like OpenAI and Microsoft, MCP is establishing itself as an industry standard and holds potential to become the new infrastructure for AI agent technology. Its basic design is simple yet extensible, enabling connection with various tools.

Meanwhile, challenges remain for true standardization, such as involving service providers and dealing with abstraction limitations. However, these are issues expected to be resolved through time and community maturation.

Investment in MCP technology is likely to bring significant long-term returns, particularly in AI utilization in enterprise domains and improving developer experiences. Technical leaders and developers would do well to consider MCP as a viable option for AI tool integration.

Personally, in my quest to find bridges between technology and business, I see great potential in open, standardized approaches like MCP. Integration with existing systems is essential for companies to fully leverage AI power, and MCP will likely serve as that crucial bridge. I'll continue to watch MCP's evolution and application cases with interest.

References

Footnotes

  1. Anthropic official announcement "Introducing the Model Context Protocol" (November 2024)

  2. Sanjeev Mohan's blog "To MCP or Not to MCP Part 1: A Critical Analysis"

  3. LangChain blog "MCP: Flash in the Pan or Future Standard?"

  4. Zenn article "Experience the Power of MCP, the Talk of the AI Agent Community!" (2025)

  5. Model Context Protocol official documentation "Introduction"

  6. Model Context Protocol official documentation "Core architecture"

  7. Model Context Protocol official documentation "Resources", "Tools", "Prompts"

  8. GitHub "playwright-mcp" repository documentation

  9. OpenAI official documentation "Model context protocol (MCP) - OpenAI Agents SDK"

  10. Model Context Protocol official documentation "Server SDK"

Ryosuke Yoshizaki

CEO, Wadan Inc. / Founder of KIKAGAKU Inc.

I am working on structural transformation of organizational communication with the mission of 'fostering knowledge circulation and driving autonomous value creation.' By utilizing AI technology and social network analysis, I aim to create organizations where creative value is sustainably generated through liberating tacit knowledge and fostering deep dialogue.
