The New Era of Agent-Based AI: Changing Forms of Collaboration
Two days before writing this article, I published "How Model Context Protocol (MCP) Changes AI Agent Development." It's fascinating that Google announced a new standard for "AI agents to communicate with each other" around the same time. This timing is no coincidence; it's evidence that the industry as a whole recognizes that the next critical challenge in AI agent technology is "interoperability and collaboration."
On April 9, 2025, at Google Cloud Next '25, Google announced the Agent2Agent (A2A) Protocol [1]. This technology isn't simply adding new features; it has the potential to fundamentally change agent-based AI. Having operated an AI education business for many years and worked on talent development and organizational transformation, I've seen how technology changes organizations and people, and vice versa. I believe A2A, too, carries influence that goes beyond its technical specification.
What is A2A? In simple terms, it's an open protocol designed to allow AI agents to communicate with each other, safely exchange information, and coordinate actions across various platforms, regardless of their underlying frameworks or vendor origins [2]. It aims to enable communication and collaboration between AI agents built on different vendors and frameworks.
My strong interest in A2A stems from finding interesting parallels between AI evolution and organizational transformation. Throughout my work in reforming organizational communication, I've witnessed how "siloing" between departments hinders value creation. There are surprisingly many commonalities between communication challenges in human organizations and collaboration challenges among AI agents.
In this article, I'll not only explain the technical mechanisms of the A2A protocol but also explore its relationship with MCP and how these technologies will impact actual organizations and business processes. I'll also provide a critical perspective to help readers make informed judgments about the protocol's future potential.
Agent Collaboration Challenges and Solutions: Tackling Siloed Systems
As AI agents gain the ability to autonomously solve complex tasks, a new challenge emerges: fragmentation and siloing.
Fragmentation of Specialized Agents
As companies introduce AI agents specialized for specific business areas or tasks across various departments, platforms, and vendors (Google's ADK, LangGraph, CrewAI, etc.), a situation arises where these agents cannot effectively communicate and collaborate with each other [3]. This is the formation of "silos" in the technological world, essentially the same challenge I've repeatedly faced in organizational transformation.
Each agent operates in its own "language" without a standard way to interact with other agents, making each an isolated island. This situation is remarkably similar to communication barriers between departments in organizations. When I founded Wadan Inc., our mission was to "redefine dialogue to achieve organizational transformation," and similarly, "dialogue" between AI agents also needs to be redefined.
JSON-RPC as a Common Language
The A2A protocol is designed to address this fragmentation in agent-to-agent communication. Importantly, A2A is built on established technical standards: it adopts HTTP as the transport, Server-Sent Events (SSE) for real-time update notifications, and JSON-RPC (JSON Remote Procedure Call) for structuring requests and responses [4]. MCP follows a similar approach. These are all widely adopted web standards, and building on them is an extremely rational choice: developers can implement A2A with familiar tech stacks, which lowers adoption barriers.
What is JSON-RPC? It's a simple protocol for making remote function calls between different computer systems using JSON as the data format. Its request and response structures are clearly defined, making it easy to implement across various languages and platforms.
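To make this concrete, here is a minimal sketch of a JSON-RPC 2.0 exchange over HTTP in Python. The endpoint URL and the `tasks/send` method name are placeholders chosen for illustration, not values quoted from the A2A specification.

```python
import json

import requests  # pip install requests

# A JSON-RPC 2.0 request: a method name, params, and an id used to correlate
# the response. The method name and endpoint below are illustrative only.
request_payload = {
    "jsonrpc": "2.0",
    "id": "req-001",
    "method": "tasks/send",
    "params": {"task": {"description": "Summarize the attached meeting notes"}},
}

response = requests.post(
    "https://agent.example.com/a2a",  # hypothetical remote agent endpoint
    json=request_payload,
    timeout=30,
)

# A conforming response echoes the id and carries either "result" or "error".
reply = response.json()
if "result" in reply:
    print("Accepted:", json.dumps(reply["result"], indent=2))
else:
    print("Error:", reply.get("error"))
```

Because the structure is this simple, any language with an HTTP client and a JSON library can speak it, which is exactly why it suits a cross-vendor protocol.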
The Collaborative World A2A Aims For
The A2A protocol functions as an "interpreter" between agents, enabling automation of tasks spanning different systems, departments, or organizations.
This unified communication protocol offers the following advantages:
- Agent discovery and capability advertisement: Each agent publishes its capabilities as an "Agent Card"
- Standardized task delegation: One agent can request clearly defined tasks from another agent
- Structured exchange of artifacts: Data and results can be passed in standardized formats
- Cross-vendor interoperability: Enables collaboration between agents from different vendors
In the context of organizational transformation, this is equivalent to building smooth communication channels that bridge departmental silos. Moreover, the standardized protocol allows each agent to maintain its "expertise" while flexibly accessing the specialized knowledge or tools of other agents when needed.
The Relationship Between A2A and MCP: Two Complementary Protocols
In relation to my article on MCP from two days ago, the most important point is that A2A and MCP have a complementary rather than competitive relationship. Each is designed to solve different problems.
The Difference Between "Tool Operation" and "Conversation"
To understand the fundamental difference between MCP and A2A, it helps to contrast the direction of integration each one provides.
MCP is "vertical integration." It's a standard for AI agents to integrate with "non-AI systems" like tools and data sources. For example, it enables browser operation, database queries, and file manipulation.
On the other hand, A2A is "horizontal integration." It's a standard for different AI agents to share their capabilities and services with each other. For example, a customer support agent can forward specialized questions to a product agent, or a scheduling agent can delegate meeting coordination to other agents.
These are not mutually exclusive but rather complementary. According to Google's official information, A2A is positioned as "complementing" MCP [4]. Since each addresses different problems, combining both creates a more powerful ecosystem.
Concrete Example of Integration
Let's look at a concrete example of how A2A and MCP can be combined. A user's personal agent uses A2A to delegate a task to a specialized research agent. The research agent uses MCP to operate external tools, such as an internal database and a web browser, to gather the necessary information. Finally, it uses A2A again to return the results to the personal agent.
This demonstrates how MCP and A2A operate at different layers to build a more comprehensive agent ecosystem, with a clear division of roles: MCP handles agent-to-tool connections, while A2A handles agent-to-agent connections.
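To make the layering tangible, here is a minimal sketch of this flow with the A2A delegation and the MCP tool calls reduced to Python stubs. Every function name and endpoint is hypothetical; the point is the division of labor, not a real client implementation.

```python
# Hypothetical sketch: A2A handles agent-to-agent delegation (horizontal),
# while MCP handles agent-to-tool access (vertical). All names are illustrative.

def a2a_delegate(remote_agent_url: str, task: dict) -> dict:
    """Stand-in for an A2A task delegation to another agent."""
    print(f"[A2A] delegating to {remote_agent_url}: {task['description']}")
    return research_agent_handle(task)  # in reality an HTTP/JSON-RPC round trip


def mcp_call_tool(tool_name: str, arguments: dict) -> str:
    """Stand-in for an MCP tool invocation (database query, web browsing, ...)."""
    print(f"[MCP] calling tool '{tool_name}' with {arguments}")
    return f"results from {tool_name}"


def research_agent_handle(task: dict) -> dict:
    """The specialized research agent reaches non-AI systems through MCP."""
    db_facts = mcp_call_tool("internal_database.query", {"query": task["description"]})
    web_facts = mcp_call_tool("browser.search", {"query": task["description"]})
    return {"artifact": f"Report combining {db_facts} and {web_facts}"}


# The personal agent delegates via A2A and receives an artifact back.
result = a2a_delegate(
    "https://research-agent.example.com/a2a",
    {"description": "Compile background on supplier X"},
)
print("[A2A] artifact returned:", result["artifact"])
```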
The Value of Combining Both
Combining A2A and MCP enables advanced scenarios such as:
- Tasks requiring knowledge from multiple domains. For example, combining legal document analysis with financial data interpretation
- Multi-step complex workflows. Data collection → analysis → report creation → distribution divided among different agents
- Cross-organizational processes. Collaboration between specialized agents connected to different departmental systems and data
- Edge case handling. Escalating inquiries that the primary agent cannot handle to specialist agents
From my organizational communication research perspective, this resembles a technical solution to the classic challenge of distributing and integrating expertise within organizations. The same structure appears when human experts collaborate, and in that sense the combination of A2A and MCP could be described as a "digital twin" of the organization.
A2A Technical Architecture: How Agent Communication Works
To understand A2A's technical details, let's examine its architecture and key components.
Client-Server Model
A2A is fundamentally based on a client-server architecture. One agent functions as a "client" while another functions as a "server." This relationship is not fixed and roles can switch depending on the situation.
Capability Advertisement via Agent Card
One of A2A's important concepts is the "Agent Card." This is JSON metadata that each agent uses to advertise its capabilities and services [5]. An Agent Card includes information such as:
- Agent identifier
- Name and description
- Type (general assistant, domain specialist, etc.)
- Authentication scheme
- Available services and functions
This can be thought of as a digital business card that agents exchange when they first meet. Agent Cards allow agents to understand other agents' capabilities and choose appropriate delegation targets.
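As an illustration, here is what such a "digital business card" might look like, sketched as a Python dictionary. The field names simply mirror the bullet points above and are not guaranteed to match the official A2A schema field for field.

```python
import json

# Illustrative Agent Card; keys follow the list above, not the official schema.
agent_card = {
    "id": "agent://example.com/billing-specialist",
    "name": "Billing Specialist Agent",
    "description": "Answers invoicing and payment questions for enterprise accounts.",
    "type": "domain-specialist",
    "authentication": {"schemes": ["oauth2"]},
    "capabilities": ["invoice-lookup", "refund-processing", "payment-plan-advice"],
}

# An agent would typically publish this as JSON at a well-known URL so that
# client agents can discover it and decide whether to delegate a task to it.
print(json.dumps(agent_card, indent=2))
```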
Task Negotiation and Execution Flow
The basic interaction flow for A2A is as follows:
- Discovery Phase: Client agent searches for remote agents with appropriate capabilities
- Negotiation Phase: Client and remote agent negotiate task details (parameters, expected output format, etc.)
- Execution Phase: Remote agent executes the task and reports progress
- Completion Phase: Remote agent provides results or artifacts to the client
What's particularly interesting is that A2A supports stateful task management. This means it can handle long-running tasks that might take hours or even days to complete. This is essential for automating practical business processes.
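The four phases can be sketched as a simple client loop: submit the task, then poll the remote agent until it reports a terminal state. The method names (`tasks/send`, `tasks/get`), status values, and endpoint below are assumptions made for illustration.

```python
import time

import requests  # pip install requests

AGENT_URL = "https://research-agent.example.com/a2a"  # hypothetical endpoint


def rpc(method: str, params: dict) -> dict:
    """Send one JSON-RPC 2.0 call to the remote agent and return its result."""
    payload = {"jsonrpc": "2.0", "id": "1", "method": method, "params": params}
    return requests.post(AGENT_URL, json=payload, timeout=30).json()["result"]


# Execution phase: submit the task (method and field names are illustrative).
task = rpc("tasks/send", {"task": {"description": "Draft a market summary for Q3"}})
task_id = task["id"]

# Because tasks are stateful, a long-running job can be polled (or streamed
# via SSE) until it reaches a terminal state.
while True:
    status = rpc("tasks/get", {"id": task_id})
    if status["state"] in ("completed", "failed", "canceled"):
        break
    time.sleep(5)

# Completion phase: collect the resulting artifacts.
print("Final state:", status["state"])
print("Artifacts:", status.get("artifacts", []))
```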
Artifact Exchange Mechanism
In A2A, "Artifacts" refer to data or files exchanged as the result of task execution4. These can include:
- Documents or reports
- Datasets or analysis results
- Images or charts
- Structured data (JSON, XML, etc.)
A2A defines standardized methods for exchanging these artifacts between agents and supports content type negotiation. This allows appropriate handling of artifacts in different formats and sizes.
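As a rough sketch, an artifact with multiple parts and content types might look like the following; the structure is an assumption for explanatory purposes, not a verbatim copy of the A2A schema.

```python
import json

# Illustrative artifact: several parts, each tagged with a content type so the
# receiving agent can negotiate how to render or store it.
artifact = {
    "name": "q3-market-summary",
    "parts": [
        {"contentType": "text/markdown", "text": "# Q3 Market Summary\n..."},
        {"contentType": "application/json", "data": {"revenue_growth_pct": 4.2}},
        {"contentType": "image/png", "uri": "https://files.example.com/chart.png"},
    ],
}

print(json.dumps(artifact, indent=2))
```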
Multimodal Support and Rich Interactions
A2A supports not just simple text-based dialogues but also diverse communication modalities including voice and video streaming [4]. This enables more natural, human-like interactions.
For example, a voice-enabled assistant agent that doesn't know the appropriate answer could forward the query to a specialist agent via A2A and play back the response vocally. Alternatively, an agent providing visual analysis could send data visualizations to another agent, which then displays them to the user.
Google's Agent Strategy and Ecosystem: A2A's Position
To understand A2A, it's important to grasp its position within Google's overall AI agent strategy. A2A is not a standalone technology but is designed as part of Google's comprehensive vision for AI agents.
Google's Comprehensive Agent Stack
Google is building a stack of AI agent-related technologies, with A2A serving as an important connection layer.
The major components of this stack are:
- Gemini Models: Foundation LLMs providing advanced reasoning capabilities to agents [1]
- Agent Development Kit (ADK): An open-source framework simplifying agent development [6]
- Agent Engine: Runtime for deploying, scaling, and managing agents [3]
- Agentspace: Enterprise platform for utilizing agents within organizations [1]
- AI Agent Marketplace: A place facilitating discovery and adoption of partner agents [7]
Within this stack, A2A functions as the "lubricant" ensuring interoperability between different components, especially between agents from different vendors. It's sometimes described as the "connective tissue."
The True Intent Behind the "Open" Strategy
Google emphasizes A2A as an "open protocol" and highlights support from over 50 partner companies [1]. This has clear strategic intentions.
Without a truly open protocol, it would be difficult to achieve widespread adoption, especially in an environment with competitors like AWS, Microsoft, and Anthropic. By declaring it "open" and involving numerous partners, Google aims to establish A2A as the de facto standard and build a Google Cloud-centered ecosystem.
This aligns with my business experience. When providing technical courses at KikaGaku, I always considered the balance between "openness" and "specialization." The strategy was to widely share the foundational framework while providing unique know-how and added value on top, balancing lock-in and adoption.
However, a "gentle lock-in" is visible beneath the surface. While A2A is indeed open, it has the highest integration with Google's ADK, Agent Engine, and Vertex AI3. This means that adopting A2A could naturally increase dependence on Google's broader stack. This is strategically understandable but something users should be aware of.
Partner Ecosystem Strategy
A2A's partner ecosystem spans multiple categories.
Most notably, the ecosystem includes major enterprise SaaS vendors (Salesforce, SAP, ServiceNow, etc.). This indicates that A2A is not merely an experimental technology but focuses on automating enterprise business processes.
The participation of major system integrators is also important. System integrators are responsible for implementing AI in actual companies, and their support for A2A could accelerate adoption in many large enterprises.
A2A's "openness" and extensive partner ecosystem will significantly contribute to the protocol's adoption. However, how truly "open" its evolution and management will remain is something that needs careful observation.
Business Value and Practical Examples: The Potential of Multi-Agent Collaboration
Having understood the technical aspects of A2A, let's consider what concrete value this technology brings to actual businesses and organizations. The potential for collaborative work among multiple agents is particularly interesting.
Core Business Value Delivered by A2A
The A2A protocol brings a range of important values to businesses.
From my organizational transformation experience, "breaking silos" and "distributed expertise" are particularly important elements. In many companies, lack of coordination between departments and systems creates major barriers to operational efficiency and customer experience. A2A can function as a technological approach to addressing this problem.
What I've consistently focused on at KikaGaku and Wadan is "creating an environment that maximizes human knowledge and skills." A2A could be a means to achieve this in the digital space. In human organizations, effective collaboration between specialists produces great results, and the same can now be possible between AI agents.
Analysis of Specific Use Cases
Automating the Recruitment Process
Google's example of the recruitment process clearly demonstrates A2A's value:
- Manager instructs their personal agent
- Personal agent uses A2A to delegate tasks to specialized agents:
  - Requests candidate searches from an external recruiting agency agent
  - Requests interview coordination from a scheduling agent
  - Requests investigations from a background check agent
- Each specialized agent returns results to the personal agent
- Personal agent presents integrated information to the manager
The notable aspect of this process is that multiple services and systems spanning both inside and outside the organization collaborate. Tasks that were traditionally manual—candidate searches, interview scheduling, background checks—are largely automated through agent collaboration.
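As a rough sketch of this fan-out pattern, the snippet below shows a personal agent delegating three subtasks to specialist agents and merging the results. The endpoints and the delegation helper are hypothetical stubs standing in for real A2A calls.

```python
# Illustrative fan-out: the personal agent delegates subtasks to specialist
# agents and aggregates their results. All endpoints and helpers are stubs.

SPECIALISTS = {
    "candidate_search": "https://recruiting-agency.example.com/a2a",
    "interview_scheduling": "https://scheduler.example.com/a2a",
    "background_check": "https://screening.example.com/a2a",
}


def a2a_delegate(endpoint: str, task: str) -> str:
    """Stand-in for an A2A delegation; a real client would POST JSON-RPC here."""
    return f"result of '{task}' from {endpoint}"


def run_hiring_workflow(role: str) -> dict:
    subtasks = {
        "candidate_search": f"Find candidates for the {role} role",
        "interview_scheduling": f"Propose interview slots for {role} candidates",
        "background_check": f"Screen shortlisted {role} candidates",
    }
    # Each subtask goes to the matching specialist; results are merged into
    # one report for the hiring manager.
    return {name: a2a_delegate(SPECIALISTS[name], task) for name, task in subtasks.items()}


print(run_hiring_workflow("data engineer"))
```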
Customer Service Escalation
Another promising use case is intelligent escalation in customer service:
- Customer inquires to a general support agent
- General agent analyzes the question and determines it requires specialized knowledge
- Uses A2A to transfer to appropriate specialist agent (technical, billing, product, etc.)
- Specialist agent generates an answer and returns it to the general agent
- General agent provides a consistent answer to the customer
The benefit of this approach is that while from the customer's perspective there is a single interface, multiple specialist agents are collaborating in the backend. This enables providing consistent, high-quality answers even to complex problems.
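The routing decision at the heart of this flow can be sketched as a simple capability match against the specialists' Agent Cards. The card fields and keyword matching below are simplified assumptions, not part of the protocol itself.

```python
# Illustrative escalation routing: pick the specialist agent whose advertised
# capabilities (from its Agent Card) best match the inquiry.

SPECIALIST_CARDS = [
    {"name": "Technical Support Agent", "capabilities": ["api", "error", "integration"]},
    {"name": "Billing Agent", "capabilities": ["invoice", "refund", "payment"]},
    {"name": "Product Agent", "capabilities": ["feature", "roadmap", "pricing"]},
]


def route_inquiry(inquiry: str) -> str:
    """Return the best-matching specialist, or fall back to the general agent."""
    text = inquiry.lower()
    best, best_hits = None, 0
    for card in SPECIALIST_CARDS:
        hits = sum(keyword in text for keyword in card["capabilities"])
        if hits > best_hits:
            best, best_hits = card, hits
    return best["name"] if best else "General agent answers directly"


print(route_inquiry("I was charged twice on my last invoice and need a refund"))
print(route_inquiry("How do I reset my password?"))
```

In a real deployment the match would of course be semantic rather than keyword-based, but the principle is the same: the Agent Card advertises what a specialist can do, and the general agent uses that advertisement to decide where to escalate.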
Evolution of Agent Collaboration Technology
A2A's significance extends beyond the emergence of yet another protocol. Considering its relationship with past technologies and research, A2A can be positioned as one step in the evolution of agent collaboration technology.
A brief timeline:

- Early multi-agent system research: foundational research in academic domains
- Rise of microservice architecture: development of service-to-service communication concepts
- Widespread adoption of large language models (LLMs): the technical foundation for autonomous AI agents is established
- November 2024: Anthropic announces the Model Context Protocol (MCP), proposing a standard for integration between agents and tools
- April 2025: Google announces the Agent2Agent (A2A) Protocol, proposing a standard for interoperability between agents
Application to Education and Talent Development
From my perspective as an AI education business operator, A2A's application to education and talent development is also very interesting.
The collaboration of diverse educational agents could provide personalized learning experiences. For example:
- Learning Plan Agent: Creates plans tailored to the learner's goals and pace
- Content Agent: Provides materials specialized for topics or skills
- Assessment Agent: Measures learning progress and provides feedback
- Mentoring Agent: Maintains motivation and provides human-like support
One of the challenges I faced at KikaGaku was building a flexible educational model that addresses learners' diverse needs. The multi-agent approach could make this ideal of "education tailored to each individual" more achievable.
Critical Analysis: Considering A2A's Future Potential
Having examined A2A's technical aspects and business value, it's important to view it critically as well. Let's analyze the challenges A2A faces and its long-term prospects.
Factors Promoting A2A's Success
What factors support the possibility that A2A will be widely adopted and become a true standard?
Particularly noteworthy is that it's built on existing standard technologies. Being based on widely adopted technologies like HTTP, Server-Sent Events, and JSON-RPC lowers the learning curve and makes it relatively easy to integrate into existing stacks.
Also significant is the strong backing of Google. They're not just proposing a protocol but also simultaneously providing tools like ADK and Agent Engine, creating an environment that makes it easier for developers to adopt A2A.
Concerns and Barriers
On the other hand, there are barriers that could prevent widespread adoption of A2A:
The most serious concern is security and reliability [8]. Enabling autonomous agents to communicate and exchange data and instructions creates significant security challenges. How to establish robust authentication between agents from different organizations? How to maintain data privacy? How to prevent protocol abuse by malicious agents?
There could also be criticism of "over-engineering" [9]. Many use cases might be adequately addressed using existing patterns (REST APIs or simpler orchestration methods). A2A might be solving problems that aren't yet pressing for many of today's organizations.
Future Scenario Analysis
Several possibilities can be considered for A2A's long-term future:
Scenario 1: De Facto Standard
In this scenario, A2A is widely adopted and becomes the primary method for agents to interoperate across different vendors and platforms. However, considering the possibility of competing approaches from major players (Microsoft, AWS, OpenAI, Anthropic) and the challenges of A2A's governance model, the probability of it becoming a complete industry standard doesn't seem very high based on current information.
Scenario 2: Google Ecosystem Standard
More likely is a scenario where A2A is widely adopted within Google Cloud and closely affiliated partner ecosystems but has limited adoption elsewhere. It would become the de facto standard for customers using Google Cloud and Vertex AI but remain just one option among many on other cloud platforms.
Scenario 3: Niche Utilization for Specific Applications
A2A might be used in specific industries or very complex use cases without achieving general adoption. It could particularly be valued in heavily regulated industries like financial services or healthcare, where complex interdepartmental or inter-organizational collaboration is needed.
Scenario 4: Stepping Stone/Influence on Future Standards
Perhaps the most interesting scenario is one where A2A influences future standardization efforts while eventually being superseded by a more mature or more broadly supported protocol. In this case, A2A's concepts and design principles would be carried forward into the next generation of standards.
Considering these scenarios, the most likely in the medium term is "Scenario 2: Google Ecosystem Standard." A2A adoption will likely progress particularly in companies already utilizing Google's AI stack. In the long term, "Scenario 4: Stepping Stone" is also highly possible, with lessons learned from A2A potentially leading to the formation of a more comprehensive industry standard.
Fundamental Security and Governance Challenges
The most essential challenge facing A2A lies in security and governance. The ability of autonomous agents to interact across organizational boundaries can potentially create complex security and trust issues [8].
While the protocol may define how to communicate, establishing robust and scalable trust frameworks and ensuring safe operation in production environments could be major barriers to widespread adoption, especially for sensitive enterprise data and processes. Beyond the basic questions of how Agent A verifies the legitimacy and authority of Agent B and how data in transit is protected, agent autonomy introduces risks of misuse by rogue agents that go beyond what standard API security measures cover.
From my perspective in AI education and organizational transformation, these challenges relate to not just technology but also organizational culture and trust-building processes. In parallel with technical security mechanisms, building trust relationships and governance models between organizations will also be necessary.
Of course, this issue is not unique to A2A but is common to AI agents and autonomous systems in general. How A2A addresses this problem and what approaches competitors take is a point to watch as technical evolution continues.
The Tension Between Open Standards and Corporate Strategy
When evaluating A2A as an "open standard," we need to carefully consider the definition and reality of "openness".
It's not uncommon in the technology industry for vendor-led standards to become means of building their own "moat" [9]. While Google is certainly publishing A2A specifications and sample code as open source, its evolution and direction are likely to proceed in line with Google's strategic interests.
A truly open protocol requires decision-making mechanisms independent of specific vendors, transparent development processes, and substantial involvement from diverse stakeholders. The extent to which A2A will meet these conditions will be an important factor determining its future.
Recommendations for Organizational Adoption: Who, When, and How
Having examined A2A's technical aspects and future potential, I'd like to offer concrete recommendations for how organizations can adopt and utilize this technology. Particularly important are the perspectives of who should be interested in A2A, when they should consider adoption, and how they should implement it.
Stakeholders Who Should Be Involved
Key stakeholders who should take interest in and deepen their understanding of A2A include:
Software Developers & AI Engineers (Priority: High)
Developers building AI agents, especially multi-agent systems or agents requiring integration with external functionalities, should understand A2A and learn how to implement it [3]. They need to know how to build A2A clients and servers in frameworks such as ADK, LangGraph, and CrewAI.
Enterprise Architects & IT Strategists (Priority: High)
Specialists responsible for designing organizational systems and formulating technical strategies should evaluate how A2A might impact their IT landscape, integration strategy, and vendor selection. Especially for companies pursuing multiple AI initiatives in parallel, A2A is worth considering as a framework for their interconnection.
Platform Vendors (Priority: Medium)
SaaS, PaaS, and cloud providers that offer platforms where agents might run or interact should consider whether to support A2A for interoperability. The initial partner list indicates the industry's direction.
System Integrators & Consultants (Priority: Medium)
System integrators and consultants who help companies implement complex solutions should accumulate expertise on A2A to design and build multi-agent solutions for clients.
AI Transformation Leaders & Innovators (Priority: High)
Those responsible for promoting AI utilization within organizations (CDOs, CTOs, innovation leaders, etc.) should understand A2A's possibilities and challenges and explore opportunities for pilot implementation.
Phased Adoption Approach
When considering A2A adoption, a phased approach based on the organization's situation and goals is important.
Phased Approach to A2A Adoption
Recognition and Education
Understand A2A concepts and educate technical teams and decision-makers. Period for monitoring trends and gathering information
Experimentation and PoC
Conduct small-scale experiments in limited environments. Prototype development and feasibility verification
Pilot Implementation
Limited implementation in specific departments or business processes. Verification of actual business value
Expansion and Integration
Horizontal expansion based on success stories. Strengthened integration with existing systems and processes
Ecosystem Formation
Building a comprehensive ecosystem through agent collaboration inside and outside the company. Creation of new business models
Prioritization for Adoption
When companies consider A2A adoption, they should determine priorities based on the following evaluation criteria:
- Degree of complexity and siloing: Areas where coordination across departments and systems is complex and fragmented will derive greater value from A2A
- Existing AI agent utilization: Organizations already utilizing multiple AI agents will benefit more from early adoption
- Technical maturity: Experience with adopting new technologies and minimal technical debt are important prerequisites for success
- Automation needs: The stronger the need to automate complex workflows and multi-step processes, the higher the value of A2A
- Data privacy and security requirements: When security requirements are high, a cautious approach to A2A adoption is necessary
Adoption Strategy and Implementation Best Practices
When implementing A2A, I recommend considering the following best practices:
1. Start with Clear Use Case Definition
Begin with specific problems or challenges and clarify how A2A can contribute to their solution. It's important to focus on actual business value rather than "technology for technology's sake."
2. Accumulate Small Successes Incrementally
Take an approach of accumulating small successes rather than large-scale transformation. Demonstrating concrete results makes it easier to gain support within the organization.
3. Place Security at the Core of Design
Make security and privacy core design elements rather than afterthoughts. Robust authentication and permission management are essential, especially for agent collaboration crossing organizational boundaries.
4. Promote Collaboration Between Teams
Foster close cooperation between technical and business teams. Not only technical implementation but also business process redesign is important to maximize A2A's potential value.
5. Clarify Agent Roles and Responsibilities
Clearly define each agent's role, responsibilities, and capabilities. This enables appropriate task distribution and efficient collaboration.
6. Ensure Monitoring and Transparency
Build mechanisms to monitor interactions between agents and make their operations transparent. This enables early problem detection and continuous improvement.
Education and Talent Development Perspective
Based on my AI education business experience, I particularly want to emphasize the importance of talent development. When introducing advanced technologies like A2A, not only implementing the technology but also developing talent that can effectively utilize it is essential.
I believe the following skill sets will be particularly important:
- Systems thinking: Ability to holistically understand complex systems where multiple agents collaborate
- Protocol design: Technical knowledge to design effective agent-to-agent communications
- Security thinking: Ability to identify potential risks and implement appropriate countermeasures
- Process optimization: Ability to redesign business processes in forms suitable for agent collaboration
- Change management: Ability to manage organizational and cultural changes accompanying new technology introduction
Since these skills aren't sufficiently covered by traditional IT education, conscious development within organizations or collaboration with external experts will be necessary.
The Dawn of the Agent Collaboration Era
In this article, I've analyzed Google's Agent2Agent (A2A) protocol from multiple angles including its technical aspects, business value, and future potential. A2A is not merely a technical specification but has the potential to become the foundation for a new automation paradigm realized through AI agent collaboration.
A2A as a Bridge Between Technology and Organization
The most interesting aspect of A2A is that it addresses not just technical problems but also organizational challenges. Standardizing agent collaboration is technically about connecting different systems, but organizationally it's equivalent to connecting different functions or departments.
This deeply resonates with my mission at Wadan of "organizational transformation through the redefinition of dialogue." The challenge of improving organizational communication and breaking down silos is essentially similar to A2A's goal of standardizing "dialogue" between agents.
"AI as a System" Composed of Multiple Agents
Future AI will likely evolve not as single models or agents but as "AI as a System" consisting of multiple collaborating agents. This represents a paradigm shift away from creating a single all-capable AI towards building intelligent systems as collections of specialized AIs.
A2A and MCP form important foundations for this new paradigm. A2A handles agent-to-agent collaboration while MCP handles agent-to-tool integration, together enabling a more flexible and extensible AI ecosystem.
Unresolved Challenges and Future Outlook
While A2A offers great potential, many challenges remain unresolved. In particular, security and governance issues will likely be the biggest barriers to widespread adoption. The future of standardization is also a point to watch.
In the medium term, adoption will progress within Google's AI ecosystem, and its value will be proven in specific complex use cases. In the long term, lessons learned from A2A may lead to the formation of more comprehensive industry standards.
"Collaboration" is the Key to Next Technological Innovation
Looking back at the history of technology, true breakthroughs have often come not just from the evolution of individual technologies but from how effectively they integrate. The Internet protocol stack, microservice architecture, and today's cloud ecosystem are examples.
Similarly, A2A will play an important role as a solution to the challenge of how effectively AI agents can collaborate, not just improving individual AI agent capabilities. In this sense, A2A is not merely a technical standard but an important step toward a collaborative future for AI.
The true transformation that AI brings to business and society will likely emerge not just from the capabilities of individual models but from "collaborative intelligence" where agents with various specialties work together. A2A is a technology that helps expand this possibility.
References
1. "Google Unfurls Raft of AI Agent Technologies at Google Cloud Next '25", Techstrong.ai
2. "Announcing the Agent2Agent Protocol (A2A)", Googblogs.com
3. "Build and manage multi-system agents with Vertex AI", Google Cloud Blog
4. "Announcing the Agent2Agent Protocol (A2A)", Google for Developers Blog
5. "Agent2Agent: Google announces open protocol so AI agents can talk to each other", SiliconAngle
6. "Agent Development Kit: Making it easy to build multi-agent applications", Google Developers Blog
7. "Google Cloud Next 2025: News and updates", Google Blog
8. "AI Agent Communication: Breakthrough or Security Nightmare?", Deepak Gupta