
The Social Burden of Unethical AI Automation: Balancing Technology and Ethics

The Future of AI Communication Through the Lens of Sender-Receiver Cost Asymmetry

May 14, 2025 · 19 min
AI and Ethics
Social Impact of Technology
Communication Asymmetry
AI Automation
Communication Strategy
From Zero-Sum to Positive-Sum
Ryosuke Yoshizaki

CEO, Wadan Inc. / Founder of KIKAGAKU Inc.


How AI Technology Has Transformed Communication Cost Structures

Recently, AI-generated sales emails and calls have increased rapidly. Website inquiries, which once involved at least a slight technical barrier, can now be easily automated with AI, leading to an explosion of templated sales outreach. Observing this phenomenon, I've been compelled to consider the gap that emerges between what is technically possible and what is ethically appropriate.

Looking back at the history of communication, there has always existed a certain balance between the costs of "writing/sending" and "reading/receiving." For example, in the era of handwritten letters, both writing and sending required significant cost and effort. Consequently, senders carefully considered their content and only contacted truly necessary recipients. This balance gradually shifted with the spread of telephones and email, but as long as humans remained involved, certain time and monetary costs still existed.

However, AI technology has caused a fundamental change in this cost structure.

I myself receive sales emails and interview requests daily that are clearly AI-generated. The sender might be human, but the impersonal content and vague salutations like "Dear Representative" reveal a lack of genuine research or interest. Technologically, this is certainly impressive progress, and many developers likely have efficiency as their well-intentioned goal. But does this truly provide communicative value in any meaningful sense?

The proliferation of these automated communication tools creates a marked asymmetry between senders and receivers. While it's efficient and virtually costless for senders, recipients are forced to spend time and effort dealing with an increasing volume of low-relevance communications. This asymmetry has begun to place a significant burden on society as a whole.

Directionality of Communication and Structural Burden

When considering AI communication automation, its directionality plays a key role. Communication follows different directional patterns such as "one-to-many" and "many-to-one" (as well as one-to-one and many-to-many).

The "one-to-many" direction describes a pattern where information flows from one source to many recipients. For example, when a company sends a newsletter to many users. In this pattern, recipients can selectively process information based on their interests, so the burden is relatively distributed (though spam remains problematic).

Conversely, in "many-to-one" communication, information from multiple sources converges on a single receiving point. AI phone reservation systems and automated sales inquiries fall into this pattern. I see social challenges beginning to emerge specifically in this structure.

Consider the AI phone reservation service recently reported in the news [1]. It calls restaurants that don't offer online reservations and books tables on behalf of users. The service is convenient for users, and the developers' aim of improving the customer experience is well-meaning, but the perspective of the restaurants receiving these calls is quite different. In fact, there have been reports of problems such as "reservations based on incorrect information" and "phones ringing continuously for 30 minutes" [1].

Unlike humans, AI incurs almost no cost per call, making it technically possible to place 100 or even 1 million calls in the time it would take a human to make just one. A single reservation staff member could easily be inundated with a flood of calls.

It's noteworthy that this pattern is structurally similar to a DDoS attack in the world of web technology. Though the intentions are entirely different, they share a common challenge: concentrated demands exceeding the processing capacity of limited resources. Recognizing this structural similarity provides an important perspective for designing social systems.
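To make the capacity argument concrete, here is a minimal sketch of the dynamic. All of the numbers (staff capacity, automated call volume) are hypothetical and chosen only for illustration:

```python
# Illustrative only: a single receiver with fixed capacity facing
# automated callers whose request rate exceeds that capacity.
# All numbers are hypothetical.

capacity_per_hour = 12         # calls one staff member can reasonably handle per hour
automated_calls_per_hour = 60  # calls that AI agents could easily generate per hour

backlog = 0
for hour in range(1, 5):
    backlog += automated_calls_per_hour - capacity_per_hour
    print(f"hour {hour}: unhandled calls piling up = {backlog}")

# As with a DDoS, no single request needs to be malicious;
# the harm comes purely from concentration exceeding capacity.
```

The point of the sketch is not the specific numbers but the shape of the outcome: whenever arrivals exceed capacity, the backlog grows without bound, and the burden lands entirely on the receiving side.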

Of course, companies developing AI calling services don't have malicious intentions; rather, they are developed with the benevolent aim of improving user experience. However, we need to recognize that social burdens can emerge as unintended consequences of technology.

The Importance of Ethical Judgment in Technological Possibilities

Witnessing the rapid development of AI and technology, the importance of ethical judgment between "what can be done" and "what should be done" becomes increasingly pronounced. Aristotle's concept of "phronesis" (practical wisdom) is particularly relevant in such situations. Phronesis refers to the practical wisdom of making appropriate judgments according to circumstances, encompassing not just "what can be done" but also "what should be done" as an ethical judgment.

In modern times, while technically possible actions are rapidly expanding, ethical impact assessments often fail to keep pace. AI automation is a prime example. Even when technically possible, there are many cases where implementation should be approached cautiously when considering social impact.

Thinking through the DIKIW pyramid (the Data-Information-Knowledge-Intelligence-Wisdom hierarchy), current AI technology demonstrates excellent capabilities at the levels of data, information, and, to some extent, knowledge. At the top of the pyramid, however, at the level of wisdom, and especially the kind of wisdom that includes ethical judgment, the human role remains indispensable.

In this era, the attitude we should adopt when developing and implementing technology is to balance the pursuit of technological possibilities with ethical judgment. And this judgment is not something to be made solely within companies but should be developed through dialogue and consensus formation across society.

To ethically evaluate the impact of AI communication technology on society, we need to consider not just short-term efficiency but also long-term social consequences. For example, we should consider the impact on the information ecosystem where truly valuable messages get buried amid increasing indiscriminate sales communications.

We are in a technological transition period, and the optimal solution is not yet clear. But that's precisely why ethical thinking and dialogue are important. Asking questions like "who will this technology affect and how?" and "are burdens and costs distributed fairly?" before implementing technology will lead to building a more sustainable technological society.

Balancing Business Efficiency and Social Responsibility

There's no doubt that AI automation increases business efficiency. However, if the pursuit of that efficiency imposes a burden on society as a whole, can it truly be called "efficiency"? I want to consider the paradox where a single company's efficiency creates inefficiency for society as a whole.

For instance, indiscriminate AI communication might seemingly improve sales activity efficiency. However, by forcing time costs on many unrelated companies and individuals, it could be creating clear inefficiency for society as a whole.

This situation is close to what economists call a zero-sum game, or rather a negative-sum game. A zero-sum game refers to a situation where one party's gain equals another's loss (the sum total becomes zero). In contrast, a positive-sum game is a situation where all parties can benefit. The unilateral cost transfer through AI automation could result in a negative outcome for society as a whole by imposing a large burden on recipients in exchange for a small benefit to senders.

Particularly noteworthy is that AI utilization reduces marginal costs to nearly zero. Marginal cost refers to the additional cost incurred to produce or provide one more unit of a good or service. In this example, it means the additional cost required to send one more sales email or make one more sales call. Human sales activities always incur personnel costs, so marginal costs in the form of time and effort arise with each additional contact, naturally leading to selection of targets and scrutiny of content. However, with AI, this marginal cost approaches nearly zero, making it technically possible to send indiscriminately to large numbers with little cost difference between sending to 100 or 10,000 recipients. This change in cost structure brings new considerations to traditional business etiquette.
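To make the asymmetry explicit, here is a rough back-of-the-envelope sketch. All of the figures (minutes per email for a human salesperson, for an AI system, and for a recipient) are hypothetical assumptions, not data from any study:

```python
# Hypothetical figures, for illustration only.
human_minutes_per_email = 20.0     # research and writing by a salesperson
ai_minutes_per_email = 0.01        # effectively zero once automated
recipient_minutes_per_email = 2.0  # time to read, judge, and discard

for recipients in (100, 10_000):
    human_sender_cost = human_minutes_per_email * recipients
    ai_sender_cost = ai_minutes_per_email * recipients
    total_receiver_cost = recipient_minutes_per_email * recipients
    print(f"{recipients:>6} recipients | human sender: {human_sender_cost:>7.0f} min"
          f" | AI sender: {ai_sender_cost:>5.0f} min"
          f" | all receivers combined: {total_receiver_cost:>6.0f} min")
```

With a human sender, the cost of each additional contact disciplines both volume and targeting; with AI, the sender's marginal cost collapses toward zero while the combined burden on recipients keeps growing linearly. That is the negative-sum pattern described above.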

Evolution of Communication Technology and Changes in Cost Structure

Early 1900s (Letters & Telegrams): Writing Cost ≒ Reading Cost (both high)

Late 1900s (Telephone & Fax): Sending Cost > Receiving Cost (certain constraints still on senders)

2000s (Email & Web): Sending Cost << Receiving Cost (the balance begins to collapse)

2020s (AI Automation): Sending Cost ≒ 0 << Receiving Cost (extreme asymmetry)

These changes in cost structure affect not just individuals and companies but also the design of social systems as a whole. For example, as mentioned earlier, some restaurants have been forced to improve their reservation systems to cope with the increase in AI calls [1]. Such responses can also be considered part of the social cost.

At the same time, it's notable that this situation may be a phenomenon specific to technological transition periods. When new technologies emerge, they often go through a period of excessive use before social consensus forms around appropriate usage methods. Email and social media experienced similar processes. AI communication automation might follow the same path.

However, to minimize the social costs and confusion that arise in the meantime, it's important for technology developers, business leaders, and users to collectively maintain an ethical perspective and engage in ongoing dialogue. We need to recognize that the greater the power of a technology, the greater the responsibility that comes with it.

The Impact of AI Communication on Brand Image

Ironically, AI automation pursued for efficiency often has the counterproductive effect of damaging a company's brand image. In my experience, AI-generated sales emails and inquiries have in many cases created an impression that makes me want to avoid doing business with those companies.

I end up ignoring most of the auto-generated emails that arrive in my inbox. Why? Because they show no sign of genuine interest in me as a recipient. I only respond to emails where the sender clearly demonstrates a specific understanding of me and my activities. For example, I take time to reply to emails that reference specific content from my articles or talks and make meaningful proposals based on them.

Why does this counterproductive effect occur? Because the essence of communication isn't merely information transfer but relationship building. Impersonal messages generated by AI or template-following messages sent by humans don't convey genuine interest or respect for the recipient. Salutations like "To Whom It May Concern" and content that doesn't clarify why that company or individual was selected don't communicate sincere intent to connect.

In building relationships with customers and business partners, the most important thing is an attitude of understanding and respecting the other party. Especially at the initial contact stage, if this attitude isn't clearly demonstrated, there's almost no possibility of establishing a good relationship.

What's important is not whether AI is being used, but how much genuine interest and respect is shown to the recipient. AI is just a tool, and how it's used is human responsibility. For example, it's possible to use AI while still thoroughly researching the recipient's needs and situation to deliver truly valuable, personalized proposals.

In fact, if AI capabilities are utilized for deeper understanding of the target, it might enable higher quality communication. For instance, using advanced AI research tools like Deep Research to deeply understand the specific needs and interests of the recipient could actually contribute to relationship building.

Ultimately, the balance between efficiency and quality is crucial. Rather than pursuing short-term efficiency by sending indiscriminately automated messages, focusing on high-quality communication with fewer recipients often leads to better long-term business outcomes. How to strike this balance partly depends on each company's values and strategy, but at minimum, it should be recognized that the simple equation "AI automation = efficiency = good thing" doesn't hold true.

Designing Better AI Communication

As AI-powered communication inevitably increases, how can we balance technological possibilities with ethical appropriateness? I believe the following principles are important:

1. Center on Value Creation for Recipients

AI-powered communication must provide clear value to recipients. Simply improving sender efficiency isn't enough for sustainable communication.

Constantly asking yourself "Does this communication truly provide value to the recipient?" can help design better communication. For example, the goal should be to provide truly useful information to people who are genuinely likely to be interested. This isn't just an ethical consideration but directly impacts long-term business outcomes.

2. Appropriately Limit Targets

The low marginal cost of AI creates the temptation to cast too wide a net. However, to achieve truly valuable communication, it's essential to appropriately limit targets.

Modern AI technology actually holds greater potential for precise targeting and personalization than for indiscriminate mass sending. Using AI research tools, it's possible to gather detailed information about each target and deliver precise messages only to those with high relevance. AI's improving capabilities should be utilized to enhance communication quality rather than quantity.

3. Be Conscious of Fair Load Distribution

It's important to recognize the cost asymmetry between the "sending/writing" and "receiving/reading" sides of communication and be conscious of fair load distribution. Especially in "many-to-one" communication types, designs that respect the resource constraints of recipients are necessary.

Technically, possible measures include rate limiting (restricting the number of messages that can be sent within a given time window) and prioritization mechanisms. It's also important to provide explicit opt-out mechanisms so that recipients can protect themselves, as in the sketch below.
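As a purely illustrative sketch of what such measures could look like in code (the per-recipient rate, burst size, and opt-out list are assumptions for the example, not a description of any existing system), a sender-side guard might combine a token-bucket rate limiter with an explicit opt-out check:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: allows at most `rate` messages
    per second on average, with short bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int) -> None:
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

opted_out = {"restaurant-042"}          # hypothetical recipients who declined contact
buckets: dict[str, TokenBucket] = {}    # one bucket per recipient

def may_contact(recipient_id: str) -> bool:
    """Return True only if the recipient has not opted out and the
    per-recipient rate limit (roughly one contact per hour) is respected."""
    if recipient_id in opted_out:
        return False                    # hard stop: the opt-out always wins
    bucket = buckets.setdefault(recipient_id, TokenBucket(rate=1 / 3600, capacity=2))
    return bucket.allow()
```

A guard like this could live in the sending tool itself or in an intermediary platform; either way, the point is that respecting recipients' capacity is a design decision the sending side can make.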

Toward Ethical AI Utilization in a Transitional Period

The possibilities for AI automation will continue to expand. We are currently in a technological transition period where social norms and rules haven't yet been fully formed. In such circumstances, the importance of ethical judgment by individual companies and developers is heightened.

As technologists, we must constantly question not just "what can be done" but "what should be done." And business leaders need to make decisions from the perspective of long-term relationship building and social responsibility, not just short-term efficiency.

Eventually, the evolution of technology and societal adaptation will find a point of harmony. Past technological innovations have also established appropriate usage methods and norms after an initial period of confusion. AI communication will likely follow a similar process.

The most important thing in this transition period is dialogue and mutual understanding between technology developers, companies, users, and society as a whole. A better balance can be found when each stakeholder tries to understand not just their own perspective but also the positions of others.

As communication automation advances with AI evolution, we should always remember human-centered values. Technology should expand human possibilities and support more valuable connections between people. This requires design and implementation that constantly balances technological possibilities with ethical considerations.

Perhaps this effort to find an equilibrium point is the important mission entrusted to us technologists and business people in a future where AI and humans coexist.

References

Footnotes

  1. "Incorrect information posted" "Phone ringing for 30 minutes straight" Restaurants still troubled by AI phone reservations even after controversy 1.5 years ago - According to an investigation by ITmedia Business, there have been reports of restaurants experiencing operational disruptions due to AI phone reservation systems. 2 3

Ryosuke Yoshizaki

CEO, Wadan Inc. / Founder of KIKAGAKU Inc.

I am working on structural transformation of organizational communication with the mission of 'fostering knowledge circulation and driving autonomous value creation.' By utilizing AI technology and social network analysis, I aim to create organizations where creative value is sustainably generated through liberating tacit knowledge and fostering deep dialogue.