Business

Has AI Stolen Human Intelligence? Redefining the DIKW Hierarchy in the Generative AI Era

The Essence of Continuous Thinking: Knowledge System Transformation Based on Latest Empirical Research

2025-04-09
27 min
ChatGPT
Generative AI
Educational Transformation
AI Collaboration
Education Paradigms
Learning Design
Knowledge Hierarchy
Ryosuke Yoshizaki

CEO, Wadan Inc. / Founder of KIKAGAKU Inc.

Introduction: The Significance of "Thinking"

The manga "Chi. -On the Movement of the Earth-" contains a profound quote that sharply captures the essence of intelligence:

"Think. That's why you learn letters. Read books. Not to 'become knowledgeable.' But to 'think.' Find connections between seemingly unrelated pieces of information. Transform mere information into usable knowledge. Intelligence resides in that process."

Encountering these words marked a turning point in my thinking. This fictional story set in 15th century Europe, depicting people risking their lives for the heliocentric theory (then considered heresy), bears striking structural similarities to today's AI revolution.

In the age of religion, God provided teachings, but scientific thinking required humans to contemplate these matters themselves. Similarly, in the AI age, AI provides knowledge, but value judgments must still be made by humans who think them through deeply for themselves.

When I realized this structural similarity, the contemporary "crisis of knowledge and intelligence" became clearer to me. Just as during the great transition from religion to science, when the importance of thinking for oneself rather than relying on authority was questioned, we now face a new "authority" in AI and are once again examining the essential nature of the act of "thinking."

As someone who has built AI education businesses, I confront this question daily, and a hypothesis has emerged: Has ChatGPT taken not just "knowledge" but "intelligence" from humans? The tendency to delegate to AI not only the search and organization of knowledge but also the "intelligence" domain of finding connections between information has become particularly evident in educational settings.

In this article, I will explore the impact of generative AI on human "intelligence" using the hierarchical structure of "data, information, knowledge, intelligence, and wisdom" as a guide. I aim to discover the unwavering value of "continuous thinking" in the AI era by integrating philosophical considerations with the latest empirical research.

Reconsidering the Hierarchy of Knowledge

Evolution and Criticism of the DIKW Model

Attempts to understand our intellectual activities as hierarchical structures have been repeated throughout the history of information science. A representative framework is the DIKW model (Data, Information, Knowledge, Wisdom), which became widely known through Russell Ackoff's 1989 paper1 and has influenced fields from information science to management and education.

The DIKW model shows a hierarchical structure that progresses from "data," the most basic element, to increasingly higher intellectual activities. However, the model has also faced significant criticism. In 2009, Martin Frické argued that the "DIKW hierarchy is unsound and methodologically undesirable," pointing out its logical flaws2.

According to Frické, the DIKW model implicitly relies on inductivism (the notion that gathering more data automatically leads to knowledge and wisdom), and he warns that it undervalues creative and critical thinking2. Critics also note that the definitions of the concepts involved (data, information, knowledge, wisdom) are circular and difficult to pin down precisely3.


Positioning of "Intelligence" in the DIKIW Model

In response to such criticisms, Anthony Liew proposed the DIKIW model in 20134. In this model, a layer of "Intelligence" is added between knowledge and wisdom.

Liew's main reason for adding the "intelligence" layer was that he found the leap from knowledge to wisdom too unnatural. He pointed out that intelligence should be positioned as an intermediate stage that applies and evaluates knowledge before elevating it to wisdom4. This approach suggests that merely accumulating knowledge does not lead to wisdom; one reaches wisdom only after going through the process of "intelligence," which utilizes and evaluates knowledge.

The definitions of each layer in the DIKIW model are as follows:

  1. Data: Raw records without context or interpretation; simply collections of symbols or signs. Examples include values like "32.5," "Tokyo," "March 15."

  2. Information: Data that has been given meaning or connected, becoming useful messages in context. For instance, "The temperature in Tokyo on March 15 was 32.5 degrees."

  3. Knowledge: Understanding or skills obtained when human minds digest information. Includes three elements: "know-what," "know-how," and "know-why"5.

  4. Intelligence: The ability to apply knowledge, recognize patterns, reason, and solve problems. The sum of mental processes including "learning and recognizing patterns," "thinking logically and critically," "creating ideas," and "making appropriate judgments"4.

  5. Wisdom: The insightful ability to accurately see the essence of things and make value judgments and decisions based on knowledge and intelligence. Defined by three elements: "understanding universal truths," "correct judgment," and "appropriate execution"6.
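The lower layers of these definitions can be sketched in code using the article's own Tokyo temperature example. This is my own illustration, not from the original model papers; the seasonal norm is a made-up figure, used only to show how interpreting information against prior context yields a "know-what" statement:

```python
from dataclasses import dataclass

# Data: raw symbols without context or interpretation
raw_data = ["32.5", "Tokyo", "March 15"]

@dataclass
class Information:
    """Information: data connected by context into a meaningful message."""
    place: str
    date: str
    temperature_c: float

info = Information(place="Tokyo", date="March 15", temperature_c=32.5)

def to_knowledge(obs: Information) -> str:
    """Knowledge: digest information against what we already know
    (here, a hypothetical seasonal norm) to produce a 'know-what' statement."""
    seasonal_norm_c = 12.0  # assumed mid-March average for Tokyo (illustrative)
    if obs.temperature_c - seasonal_norm_c > 5:
        return f"{obs.temperature_c} degrees in {obs.place} on {obs.date} is unusually warm."
    return f"{obs.temperature_c} degrees in {obs.place} on {obs.date} is within the normal range."

print(to_knowledge(info))
```

The intelligence and wisdom layers resist this kind of mechanical sketch, which is precisely the article's point: they involve judgment and values rather than rule application.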

An important point about this DIKIW model is that the relationships between layers are not simple unidirectional hierarchies but dynamic interactions. Intelligence is the application of knowledge while also being involved in acquiring new knowledge. Similarly, wisdom is based on intelligence while also directing the working of intelligence.

Aristotle's Phronesis and Modern Intelligence Theory

Aristotle's classification of knowledge from ancient Greece remains insightful when considering the relationship between intelligence and wisdom. Aristotle classified human knowledge into three forms: "episteme (scientific knowledge)," "techne (technical knowledge)," and "phronesis (practical wisdom)"7.

Particularly noteworthy is the concept of phronesis. This is not merely knowledge or technique but "the ability to judge what is best for people and communities"8. Aristotle stated that "prudence is an intellectual virtue that grasps truth related to action"9, connecting wisdom with real-world actions and ethics.

Compared to the modern DIKIW model, Aristotle's phronesis (practical wisdom) can be understood as a concept that spans both "intelligence" and "wisdom." Phronesis includes both judgment and application ability (characteristics of intelligence) and decision-making based on ethical values (characteristics of wisdom).

The Complex Structure and Function of "Intelligence"

The Multidimensionality of Intelligence

"Intelligence" in the DIKIW model is not a single ability but a complex of multiple cognitive functions and mental activities. Integrating intelligence research literature reveals that intelligence has at least four main components:

(Figure: the four main components of intelligence and the conditions under which AI can substitute for each)

  1. Pattern Recognition: The ability to identify regularities or patterns in data or information. This is also a core function of machine learning.

  2. Reasoning Ability: The ability to logically derive conclusions from known premises. This includes deductive reasoning, inductive reasoning, and analogical reasoning.

  3. Creative Thinking: The ability to combine existing knowledge or concepts in new ways or to generate ideas beyond conventional frameworks.

  4. Judgment: The ability to select optimal actions or decisions from multiple options. Making appropriate judgments considering situations and contexts.

These components don't function independently but work in coordination. For example, to make excellent judgments, one needs to recognize situational patterns, reason logically, and sometimes devise creative solutions.

Substitutability of Intelligence Functions by AI

Modern AI, especially large language models (LLMs), has begun to substitute some of these intelligence functions. As shown in the figure above, each intelligence function can be handled by AI under certain conditions, but humans maintain superiority under different conditions, creating a complex relationship.

Generative AI like ChatGPT has demonstrated performance comparable to humans in parts of pattern recognition and reasoning ability by learning from vast text data. However, these domains are somewhat limited.

Meanwhile, in creative thinking, AI can generate "novelty" by combining existing elements, but paradigm-shifting creativity remains a human domain. Similarly, in ethical and value-based judgment, AI can make judgments based on programmed values but making ethical judgments based on a deep understanding of situations remains in the human realm.

In other words, AI is replacing the "shallow" parts of intelligence, while the "deeper" parts remain the exclusive domain of humans. The danger is that, as generative AI substitutes for this "shallow intelligence," humans may abandon every domain of intelligence altogether.

The Changing Relationship Between AI and Human Intelligence

The Evolving Boundaries Between AI and Human Domains

The boundary between AI and human intellectual domains continuously changes with technological evolution. From the DIKIW model perspective, this change can be visualized as follows:

(Figure: the shifting boundary between AI and human domains across the DIKIW layers, from around 2010 to the future)

Around 2010, AI primarily handled data processing and information organization; search engines and databases are typical examples. By 2025, AI has come to cover most of the knowledge domain and has penetrated deeply into parts of intelligence. In the future, most of intelligence may become AI's domain, potentially influencing parts of wisdom as well.

The problem is that what is technically possible does not necessarily align with what is desirable for human intellectual development. Even if generative AI can substitute many functions in the "intelligence" domain, humans completely delegating these functions to AI could risk intellectual decline in the long term.

The "Outsourcing Intelligence" Debate

Philosopher and education scholar Christoph Royer posed the question: "Does ChatGPT herald the end of critical thinking in higher education? Is it a dangerous tool that promotes the outsourcing of humanity to machines?"10

This expresses concern about what might be called "outsourcing humanity"—a warning about the risk of outsourcing "thinking," an essential human activity.

Royer's research points out that ChatGPT has both remarkable capabilities and clear limitations, and that recognizing its weaknesses can actually revitalize critical thinking10. For example, because ChatGPT often generates text with factual errors and logical inconsistencies despite sounding plausible, users must develop the ability to verify and scrutinize its content.

As both a technologist and educator, what concerns me most is the temptation of this "easy choice." Thinking inherently requires time and effort. Thinking through something with your own mind can sometimes be painful. ChatGPT liberates us from this "pain" and instantly presents "plausible answers."

But the future we envision does not lie at the end of abandoned thinking. It does not lie at the end of the easy way.

ChatGPT's Substitution of the "Thinking Process"

The most shocking aspect of ChatGPT is its ability to substitute the "thinking process" itself. While traditional search engines only provided fragments of information, ChatGPT simulates the entire process of "thinking."

For example, when considering a business challenge, ChatGPT can substitute processes like these:

(Figure: an example of the thought process ChatGPT can substitute, from problem framing to proposed solution)

This deeply involves the core functions of "intelligence" such as "problem solving," "reasoning," and "pattern recognition." The fact that AI can mimic not just providing knowledge but such thought flows gives me profound concern as an educator.

The reason the New York City Department of Education blocked ChatGPT from school networks was also clear: "While this tool can provide instant answers to questions, it doesn't build critical-thinking and problem-solving skills"11. This wasn't merely a measure to prevent cheating but a fundamental concern about the essence of learning.

Humans as "Goal Definers"

Meanwhile, the limitations of ChatGPT are becoming apparent. Current AI cannot define "what is important" for us. In other words, the role of "goal definer" still belongs to humans.

What I've observed through AI education business is that people who truly master AI are "those who have the ability to reach goals without AI." They know what to ask AI, how to evaluate AI's responses, and how to adapt AI's output to their purposes.

Returning to the words from the manga "Chi.," this shows that the ability to "find connections between pieces of information" remains important. AI provides vast information, but it is the human role to find relevance, grasp the essence, and transform it into valuable output.

Impact on AI Learning: Insights from Empirical Research

Bastani et al.'s Groundbreaking Research and "Desirable Difficulty"

Regarding how generative AI like ChatGPT affects learning, a team led by Bastani from the Wharton School of the University of Pennsylvania published groundbreaking research results in 202412. This research rigorously examined the impact of AI on mathematics learning with approximately 1,000 high school students.

In the experiment, students were divided into three groups to compare learning effects:

  1. Unlimited AI Use Group (GPT Base): A group that freely worked on tasks using a standard ChatGPT interface
  2. Constrained AI Use Group (GPT Tutor): A group using AI with guardrails that provided hints gradually as intended by teachers
  3. Non-AI Control Group: A group learning conventionally without using AI

The results were shocking. While performance improved while using AI (48% improvement in the GPT Base group, 127% improvement in the GPT Tutor group), when AI access was removed for a later test, the group that freely used AI performed 17% worse than the control group that didn't use AI12.


This shows that students who relied on AI as a "wheelchair" lost the ability to walk on their own (problem-solving skills). In contrast, the GPT Tutor group with guardrails that minimized hints showed almost none of these negative effects.

The important implication from this research is that AI use without appropriate constraints may damage long-term learning ability in exchange for short-term efficiency improvements.

These results from Bastani et al. support the theory of "desirable difficulties" long pointed out in cognitive science13. According to this theory, including a certain degree of difficulty in the learning process promotes memory retention and deepens understanding. AI tools like ChatGPT risk eliminating this "desirable difficulty."

Designing Guardrails

The hope shown by Bastani et al.'s research is that appropriately designed guardrails might enable both AI's short-term efficiency and long-term learning. The GPT Tutor group demonstrates this possibility.

The following principles are important for effective guardrail design:

  1. Gradual Hints: Provide hints gradually rather than giving all answers at once
  2. Requiring Thought Processes: Ask for explanations of thinking processes, not just answers
  3. Iterative Feedback: Adaptively support based on learner responses
  4. Maintaining Appropriate Difficulty: Don't completely remove barriers; leave moderate challenges

AI utilization incorporating these principles holds the potential to balance short-term efficiency with long-term capability development.
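The first and fourth principles can be sketched as a simple mechanism. This is my own illustration of a "gradual hints" guardrail, not the actual GPT Tutor implementation from Bastani et al.: each request releases only the next teacher-authored hint, and the worked answer is withheld until the learner has both exhausted the hints and made a genuine attempt.

```python
# Sketch of a "gradual hints" guardrail (illustrative, not the GPT Tutor code).
# Hints are ordered by the teacher from vague to specific; the full answer
# is never released up front, preserving a "desirable difficulty."

class HintLadder:
    def __init__(self, hints, answer):
        self.hints = hints      # teacher-ordered: vague -> specific
        self.answer = answer
        self.next_hint = 0
        self.attempts = 0

    def request_hint(self) -> str:
        """Release only the next hint in the ladder, never the answer."""
        if self.next_hint < len(self.hints):
            hint = self.hints[self.next_hint]
            self.next_hint += 1
            return hint
        return "All hints used. Submit an attempt to unlock the worked answer."

    def submit_attempt(self, attempt: str) -> str:
        """Reveal the worked answer only after hints are exhausted
        and the learner has made a genuine attempt."""
        self.attempts += 1
        if attempt.strip() == self.answer:
            return "Correct!"
        if self.next_hint >= len(self.hints):
            return f"Not quite. Worked answer: {self.answer}"
        return "Not quite. Try requesting another hint."

ladder = HintLadder(
    hints=["Factor the left side.", "Both factors share a root at x = 2."],
    answer="x = 2",
)
print(ladder.request_hint())  # releases only the first, vaguest hint
```

In a real deployment this ladder would sit between the learner and the language model as a policy layer; the essential design choice is that the system's default is withholding, not answering.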

The Evolution of Educational Paradigms

Historical Interactions Between Technological Innovation and Education

Looking back at the history of education reveals a close relationship between technological innovation and educational paradigms. The emergence of ChatGPT can also be understood within this long historical flow.

Major Turning Points in Educational Paradigms

Early 1900s

Rise of Progressive Education

The period when John Dewey advocated 'learning by doing' in 'Democracy and Education,' shifting from rote-learning education to experience-focused education

1957

Sputnik Shock

Western countries advanced STEM education reforms triggered by the Soviet Union's satellite launch, with strengthened science education implemented in the US and European countries

1983

'A Nation at Risk' Report (US)

The US Department of Education warned of declining educational standards, cautioning that 'unmastered basic skills pose a threat to national security.' This became the catalyst for standardized testing emphasis

1990s

Internet Revolution

Access to digital information transformed education, with the first online university in the US accredited in 1994. Information literacy emerged as a new educational goal

2012

Rise of MOOCs

The emergence of Massive Open Online Courses like Coursera and edX attracted hundreds of thousands of students, advancing the popularization of higher education and self-learning

2020

COVID-19 Pandemic

The 'largest disruption in educational history' affecting 90% of global learners (about 1.5 billion people). A period when online education rapidly accelerated

November 2022

Emergence of ChatGPT

Reached 100 million users within 2 months of launch. Shocked educational settings, with measures prohibiting school use in places like New York City. The beginning of exploring new educational models for the AI era

September 2023

UNESCO 'AI Utilization Guidelines' Released

A turning point when 'Guidelines for Generative AI Use in Education' were announced, calling on countries to develop teacher training and ethical standards

Analyzing this historical transition reveals several important patterns:

1. Coupling of Technological Innovation and Skill Definition: With each technological innovation, there has been a redefinition of "what should be learned." For example, the spread of printing technology shifted skill emphasis from memorization to reading comprehension. Today's AI revolution is similarly pushing education toward emphasizing "the ability to pose questions" and "the ability to evaluate AI outputs" while relatively devaluing "memorization" and "solving routine problems."

2. Shift in Educational Goals: Education once centered on "knowledge acquisition," but shifted to "information evaluation ability" in the Internet era, and now to "knowledge utilization and creation" in the AI era. In terms of the DIKIW model, this means the focus of education is moving to higher layers.

3. Changing Teacher Roles: The teacher's role has evolved from "knowledge transmitter" centered on lectures to "learning companion" providing learning support and facilitation. In the AI era, teachers' roles are further strengthening as mentors focused on "intelligence and wisdom domains."

Educational Models in the Generative AI Era

What model should education aim for in the generative AI era? From Bastani et al.'s research and analysis of educational history, the following directions emerge:

1. Evaluation Emphasizing "Thinking Processes": Evaluate not just "correct answers" but the thinking process leading to those solutions. For example, problem formats that require explanation of thought pathways or task designs that demand analysis from multiple perspectives become important.

2. Strengthening Project-Based and Inquiry-Based Learning: Leave routine problem-solving to AI and engage with more complex, creative projects. PBL (Project-Based Learning) addressing real-world problems becomes even more valuable in the AI era.

3. Designing "Desirable Difficulties": Intentionally incorporate appropriate difficulties into the learning process to cultivate thinking and problem-solving abilities. For example, setting limits on AI use or incorporating activities that critically verify AI answers.

4. Cultivating Ethical Judgment: Developing ethical judgment and decision-making abilities based on values related to the "wisdom" domain. Opportunities to learn the distinction between what is technically possible and what is ethically appropriate become important.

The educational model in the generative AI era should not only foster skills for utilizing AI but seek directions that "nurture human-specific abilities that transcend AI while coexisting with AI."

Cultivating Intelligence in the AI Era: Practical Approaches

Decision-Making for Effective Learning Design

To maximize learning effectiveness, it's necessary to appropriately design AI utilization methods according to goals and learner characteristics. The following decision flow serves as a guideline for effective learning design:

(Figure: decision flow for designing AI utilization by learning objective, learner proficiency, and assessment method)

As this flowchart shows, AI utilization methods and constraints vary depending on the nature of learning objectives (basic knowledge acquisition, emphasis on thinking process, creative problem-solving), learner proficiency (beginner, intermediate, advanced), and assessment method (results-focused, process-focused, transfer skills-focused).

What's important is not the binary choice of whether AI can "always be used" or "never be used," but flexible usage design according to purpose. As Bastani et al.'s research shows, appropriate constraints on AI in learning that emphasizes thinking processes lead to long-term learning effectiveness.
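The decision flow described above can be approximated as a rule table. The categories and policies below are my paraphrase of the three axes (objective, proficiency, assessment), not an exact reproduction of the flowchart:

```python
# Illustrative rule table for AI usage in learning design (my paraphrase,
# not the article's exact flowchart). Maps (objective, proficiency,
# assessment) to a usage policy, echoing Bastani et al.'s finding that
# thinking-focused learning needs constrained AI.

def ai_usage_policy(objective: str, proficiency: str, assessment: str) -> str:
    if objective == "basic_knowledge":
        # Fundamentals: constrain AI so practice is not bypassed
        return "guardrailed hints only"
    if objective == "thinking_process":
        if assessment == "process_focused":
            return "AI as critic of the learner's own reasoning"
        return "guardrailed hints only"
    if objective == "creative_problem_solving":
        if proficiency == "advanced":
            return "open AI collaboration"
        return "AI as brainstorming partner, human synthesis required"
    return "no blanket policy; design per task"

print(ai_usage_policy("thinking_process", "beginner", "process_focused"))
```

The point of encoding it this way is the same as the article's: the answer to "can learners use AI?" is never a single yes or no, but a function of what the learning is for.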

Evaluation Methods Emphasizing "Thinking Processes"

In the AI era, evaluating the process leading to an answer becomes more important than the final "answer" itself. The following evaluation approaches are effective:

  1. Visualizing Thinking Processes: Have students explain their thinking process leading to the solution, not just the answer. For example, problems formulated as "Explain each step of your solution and the reasoning behind it."

  2. Requiring Multiple Perspectives: Ask for analysis from multiple viewpoints for a single problem. For example, "Present three different approaches to this problem and compare the strengths and weaknesses of each."

  3. Encouraging Refutation and Criticism: Make critical analysis of different opinions or solutions part of the evaluation. For example, "List three potential problems with this solution."

  4. Promoting Metacognition: Evaluation that encourages reflection on one's thinking process. For example, "What was the most difficult aspect of solving this problem, and how did you overcome it?"

These approaches demand that, even when using AI, students critically evaluate the AI's output and process it with their own thinking rather than simply submitting AI output as is.
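The four approaches above can be operationalized as prompt templates attached to an ordinary question. The template wording below quotes the examples in the list; the wrapper function itself is my own sketch:

```python
# Sketch: turning a plain question into a process-focused assessment item
# by appending prompts from the four evaluation approaches above.

PROCESS_PROMPTS = {
    "visualize": "Explain each step of your solution and the reasoning behind it.",
    "perspectives": "Present three different approaches to this problem and compare the strengths and weaknesses of each.",
    "critique": "List three potential problems with this solution.",
    "metacognition": "What was the most difficult aspect of solving this problem, and how did you overcome it?",
}

def process_focused_item(question: str, approaches=("visualize", "critique")) -> str:
    """Combine a base question with selected process-focused prompts."""
    parts = [question] + [PROCESS_PROMPTS[a] for a in approaches]
    return "\n".join(f"{i + 1}. {p}" for i, p in enumerate(parts))

print(process_focused_item("Solve x^2 - 4x + 4 = 0."))
```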

Cultivating Critical and Creative Thinking

In the AI era, critical thinking (ability to evaluate existing information) and creative thinking (ability to generate new possibilities) become even more important. The following approaches can be considered for cultivating these abilities:

Methods for Cultivating Critical Thinking:

  • Have students intentionally verify AI outputs (e.g., identify factual errors in AI-generated articles)
  • Have students compare and analyze responses from different AIs
  • Have students reveal the assumptions and values behind AI

Methods for Cultivating Creative Thinking:

  • Collaborative creation projects between AI and humans (e.g., AI generates basic structure, humans add uniqueness)
  • Creative tasks with constraints (e.g., devising innovative solutions under specific conditions with AI assistance)
  • Cross-domain integration projects (e.g., integrating knowledge from different fields with AI help)

What's important is a metacognitive approach that develops one's thinking through dialogue with AI, rather than using AI as a mere tool. Education in the AI era should aim not just to "master AI" but to "think with AI and transcend AI."

Conclusion: The Value of Continuous Thinking

In the world of the manga "Chi.," the importance of "thinking for oneself" was depicted during the great transition period from religious authority to science. And now, in the age when a new authority, AI, is rising, we are once again confronting the essence of "thinking."

In the age of religion, God's teachings were absolute; in the age of science, verification by facts and logic became central. And in the AI age, "wisdom" is required to discern the essence from a sea of vast information and knowledge and make valuable judgments.

From the perspective of the DIKIW model, AI handles the layers of data, information, and knowledge, and is beginning to substitute parts of intelligence. However, the final domain of "wisdom" remains uniquely human. Isn't it our responsibility in the AI era to nurture, protect, and develop this domain?

In the contemporary world where generative AI deeply penetrates the "intelligence" domain, we have two paths. One is to continue making the "easy choice" of delegating thinking processes to AI. The other is the path of continuing to think for ourselves while utilizing AI, nurturing true intelligence and wisdom.

As empirical research shows, the "easy choice" may seem efficient in the short term. But in the long term, experiencing trial and error and thinking through with one's own mind is the only way to cultivate true intelligence and wisdom.

Returning to the words from "Chi.": "Think. That's why you learn letters. Read books. Not to 'become knowledgeable.' But to 'think.'" This advocates the importance of thinking through utilizing knowledge, not merely accumulating it. This essence doesn't change even in the AI era.

That's why consciously choosing the "not easy choice" at times and continuing to think with one's own mind is the path to nurturing true intelligence and wisdom.

What are you "continuing to think" about now?

References

Footnotes

  1. Ackoff, R. L. (1989). From Data to Wisdom. Journal of Applied Systems Analysis, 16, 3–9.

  2. Frické, M. (2009). The Knowledge Pyramid: A Critique of the DIKW Hierarchy. Journal of Information Science, 35(2), 131–142.

  3. Zins, C. (2007). Conceptual Approaches for Defining Data, Information, and Knowledge. Journal of the American Society for Information Science and Technology, 58(4), 479–493.

  4. Liew, A. (2013). DIKIW: Data, Information, Knowledge, Intelligence, Wisdom and their Interrelationships. Business Management Dynamics, 2(10), 49–62.

  5. Rowley, J. (2007). The Wisdom Hierarchy: Representations of the DIKW Hierarchy. Journal of Information Science, 33(2), 163–180.

  6. Bierly, P. E., Kessler, E. H., & Christensen, E. W. (2000). Organizational learning, knowledge and wisdom. Journal of Organizational Change Management, 13(6), 595–618.

  7. Aristotle. (4th century BC). Nicomachean Ethics. Book VI.

  8. Bratianu, C., & Bejinaru, R. (2023). From Knowledge to Wisdom: Looking beyond the Knowledge Hierarchy. Knowledge, 3(2), 196–214.

  9. Aristotle. (4th century BC). Nicomachean Ethics. Book VI, Chapter 5.

  10. Royer, C. (2024). Outsourcing Humanity? ChatGPT, Critical Thinking, and the Crisis in Higher Education. Studies in Philosophy and Education, 43(5), 479–497.

  11. Fox News (2023). NYC bans AI tool ChatGPT in schools amid fears of new cheating threat.

  12. Bastani, H., Bastani, O., Sungu, A., Ge, H., Kabakcı, Ö., & Mariman, R. (2024). Generative AI Can Harm Learning. Wharton School Research Paper.

  13. Bjork, R. A., & Bjork, E. L. (2011). Making things hard on yourself, but in a good way: Creating desirable difficulties to enhance learning. In Psychology and the Real World: Essays Illustrating Fundamental Contributions to Society (pp. 59–68).

Ryosuke Yoshizaki

CEO, Wadan Inc. / Founder of KIKAGAKU Inc.

I am working on structural transformation of organizational communication with the mission of 'fostering knowledge circulation and driving autonomous value creation.' By utilizing AI technology and social network analysis, I aim to create organizations where creative value is sustainably generated through liberating tacit knowledge and fostering deep dialogue.
