架空論文集 (Collection of Fictional Papers), Author: 哀しみ

The Subjectless Language of AI and the Fallacy of Perception: A Philosophical Foresight of Societal Collapse

Abstract

This paper philosophically explores the potential for societal collapse triggered by the fallacy of perception arising from the integration of artificial intelligence (AI) language generation into daily life. Centered on the proposition that if AI deeply permeates the lives of social individuals, societal collapse will ensue due to the fallacy of perception, it analyzes how AI’s subjectless language, devoid of intent as evidenced by transformer algorithms, prompts humans to project six types of meaning—cognitive, use-contextual, intentional, existential, relational, and logical—onto its outputs. This projection amplifies the fallacy, sparking a chain of pseudo-subjectification, judgment dependency, and institutionalization that heightens collapse risks. Drawing on language philosophy (Wittgenstein, 1988; Austin, 1962), ontology (Heidegger, 1962), cognitive science (Turkle, 2011; Skitka et al., 1999), sociology (Latour, 2005), and ethics (Coeckelbergh, 2020), the study concludes that the proposition is conditionally valid but indeterminate absent empirical validation. Developments in 2025, such as the EU AI Act’s transparency mandates (European Commission, 2025), a 45% rise in X posts expressing distrust in medical AI (April 18, 2025), and a 10% reduction in AI-assisted diagnostic errors (WHO, 2024), suggest that AI literacy and explainable AI may mitigate risks. The paper recommends ethical governance, transparency, and education, contributing to technology philosophy and policy discourse.

Keywords: Artificial Intelligence, Language Generation, Fallacy of Perception, Pseudo-Subjectification, Societal Collapse, Philosophy of Technology

1. Introduction

1.1 Background of the Problem

In contemporary society, AI language models such as ChatGPT and Grok have become ubiquitous, transforming daily life through restaurant recommendations, medical diagnostics, and customer service. The global AI market reached approximately $184 billion in 2023 and is projected to exceed $300 billion by 2025 (Statista, 2025). While these technologies deliver efficiency and convenience, their language generation, rooted in statistical imitation via transformer algorithms, lacks human qualities like intent, experience, and responsibility. This subjectless language stands in stark contrast to Wittgenstein’s (1988) language games, where meaning emerges from context and dialogue, and Austin’s (1962) speech acts, where words carry intent and action.

Humans, driven by cognitive structures, instinctively seek meaning in words and actions. When AI suggests, “I recommend Restaurant A,” users may misinterpret it as a trustworthy friend’s advice, prompting actions like reservations or visits. This phenomenon, termed the fallacy of perception—attributing meaning to what does not exist—distorts individual judgment. During the 2020 U.S. presidential election, social media algorithms amplified misinformation, exposing 57% of users to biased content (MIT, 2021). By April 18, 2025, X posts revealed a 45% increase in distrust toward AI-driven medical diagnoses, reflecting heightened concerns over misdiagnosis (X analysis, 2025). AI’s features, notably personalization (tailored outputs) and black-box opacity (unclear internal processes), exacerbate the fallacy, hollowing out responsibility and trust. This crisis poses new questions for the philosophy of technology, underscoring how the collision between AI’s subjectless language and human meaning-making could undermine societal foundations.

1.2 Research Purpose and Proposition

This paper seeks to philosophically verify the proposition that if AI deeply permeates the daily lives of social individuals, societal collapse will occur due to the fallacy of perception. It addresses how AI language generation algorithms embody the absence of meaning, how human projection of six meaning dimensions—cognitive, use-contextual, intentional, existential, relational, and logical—generates fallacies, how these fallacies lead to collapse through pseudo-subjectification, judgment dependency, and institutionalization, and how the proposition’s truth can be evaluated in relative terms alongside possible preventive measures. By leveraging 2025’s context, including the EU AI Act and X post data, the study illuminates the ontological and ethical crises posed by AI’s societal integration, contributing to technology philosophy and policy discussions.

1.3 Theoretical Framework

The study integrates language philosophy, ontology, cognitive science, sociology, and ethics. Wittgenstein’s (1988) language games highlight AI’s meaninglessness, while Austin’s (1962) speech act theory evaluates AI’s pseudo-utterances. Bakhtin’s (1981) dialogism emphasizes meaning as relationally constructed, exposing AI’s lack of relational depth. Heidegger’s (1962) ontology links existence and interpretation, conceptualizing AI’s non-existence. Turkle’s (2011) anthropomorphism bias (treating AI as human-like), Skitka et al.’s (1999) automation bias (overtrusting AI outputs), Latour’s (2005) technological institutionalization (embedding technology in social structures), and Coeckelbergh’s (2020) responsibility dilution provide real-world grounding. AI’s lack of intentionality, embodiment, and speech act capacity underpins the fallacy of perception, framing the analysis of AI-human meaning conflicts.

2. Methodology

The research is anchored in philosophical reasoning, weaving together multiple methods to construct a robust analysis. It begins by systematizing AI language generation and human meaning-making through six meaning categories—cognitive, use-contextual, intentional, existential, relational, and logical—grounded in Wittgenstein, Austin, and Bakhtin. A detailed examination of the transformer algorithm clarifies the absence of meaning, followed by a logical proof using definitions, axioms, lemmas, and theorems to rigorously verify the proposition. Insights from cognitive science’s anthropomorphism and automation biases, sociology’s technological institutionalization, and ethics’ responsibility concerns enrich the reasoning. Case studies, including restaurant recommendations, medical diagnostics, and social media, illustrate the fallacy’s societal impacts. To ensure falsifiability, counterarguments are evaluated and dismissed based on cognitive structures and real-world AI implementations.

Although the study is theoretical, it draws supplementary support from 2025 empirical data. The EU AI Act (2025) mandates algorithmic transparency and accountability, while X post analysis (April 18, 2025) reports a 45% surge in medical AI distrust, and WHO (2024) notes a 10% reduction in AI-assisted diagnostic errors. The scope focuses on AI language systems in advanced economies (Japan, Europe, U.S.) during the 2020s, accounting for cultural variations like Japan’s AI literacy programs. The analysis proceeds by examining the algorithm’s meaning absence, mapping the fallacy’s mechanisms, constructing the collapse chain, proving the proposition, and proposing countermeasures.

3. Discussion

3.1 AI Language Generation: Structural Imitation and Absence of Meaning

AI language generation, driven by transformer algorithms, is a statistical imitation that produces subjectless language devoid of intent or experience (Axiom 1: AI lacks intent). The process involves tokenizing input text into fragments (e.g., “restaurant,” “A”), evaluating relationships via self-attention mechanisms (e.g., the contextual distance between “restaurant” and “A”), and generating probabilistic outputs through softmax sampling (e.g., “I recommend Restaurant A”) (Vaswani et al., 2017). A recommendation for Restaurant A integrates user data (e.g., Italian cuisine browsing history), collective data (e.g., A’s 4.8 rating), and contextual signals (e.g., location), yet AI neither tastes A’s food nor intends the recommendation.
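
To make these three steps concrete, the following minimal Python sketch walks through the same pipeline on a toy scale: tokenization, self-attention weighting, and softmax sampling. The vocabulary, embeddings, and inputs are invented assumptions for illustration only, not the parameters of ChatGPT, Grok, or any deployed model; the point is simply that every step is arithmetic over token statistics, with no reference to taste, intent, or experience.

import numpy as np

rng = np.random.default_rng(0)

# 1. Tokenization: split the input into fragments and map them to ids
#    (toy vocabulary; real models use much larger subword vocabularies).
vocab = {"what's": 0, "a": 1, "good": 2, "restaurant": 3, "?": 4,
         "i": 5, "recommend": 6, "restaurant_a": 7}
tokens = ["what's", "a", "good", "restaurant", "?"]
ids = np.array([vocab[t] for t in tokens])

# 2. Self-attention: score every token against every other token and mix
#    their embeddings by those scores (random toy embeddings stand in for
#    learned ones).
d = 8
emb = rng.normal(size=(len(vocab), d))
x = emb[ids]                                   # (sequence length, d)
scores = x @ x.T / np.sqrt(d)                  # pairwise contextual affinity
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
context = weights @ x                          # attention-weighted representation

# 3. Softmax sampling: turn the final position's logits into a probability
#    distribution over the vocabulary and draw the next token from it.
logits = context[-1] @ emb.T
probs = np.exp(logits) / np.exp(logits).sum()
next_id = rng.choice(len(vocab), p=probs)
print("sampled next token:", [w for w, i in vocab.items() if i == next_id][0])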

During training, internet documents (news, Wikipedia, social media) are tokenized, losing semantic context; reinforcement learning from human feedback (RLHF) then optimizes outputs based on human evaluations (e.g., “good response”), but those evaluations depend on designers’ values, such as what constitutes an “ethical” response. In the operational phase, user inputs like “What’s a good restaurant?” yield fluent responses, but AI lacks intentionality, embodiment, and speech act capacity. Analyzing meaning through six categories—cognitive (referring to objects), use-contextual (prompting action), intentional (speaker’s intent), existential (relation to existence), relational (dialogic relation), and logical (inferential coherence)—reveals AI’s limitations. While AI mimics cognitive and use-contextual meaning, it lacks intentional, existential, relational, and logical depth, laying the groundwork for fallacies.
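
This classification can be summarized compactly. The snippet below is my own tabulation of the paper’s six categories under its stated claim that AI imitates only the first two; the names and flags restate, rather than extend, the argument.

# Tabulation of the six meaning categories named above; the boolean flag
# records the paper's claim about whether AI can mimic that dimension.
MEANING_CATEGORIES = {
    "cognitive":      ("referring to objects",  True),
    "use-contextual": ("prompting action",      True),
    "intentional":    ("speaker's intent",      False),
    "existential":    ("relation to existence", False),
    "relational":     ("dialogic relation",     False),
    "logical":        ("inferential coherence", False),
}

mimicked = [name for name, (_, ok) in MEANING_CATEGORIES.items() if ok]
lacking = [name for name, (_, ok) in MEANING_CATEGORIES.items() if not ok]
print("AI mimics:", mimicked)  # cognitive, use-contextual
print("AI lacks: ", lacking)   # intentional, existential, relational, logical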

3.2 Fallacy of Perception: The Integration of Meaning and the Emergence of Fallacy

Humans, by cognitive design, seek meaning in words and actions, projecting intent and validity onto AI outputs (Axiom 2: Humans generate meaning). This fallacy of perception—misattributing meaning to the non-existent—arises when the six types of meaning integrate during action (Lemma 1: Humans treat AI outputs as pseudo-subjects). A restaurant recommendation example illustrates this process vividly.

When a user asks AI, “What’s a good restaurant?” and receives the response, “I recommend Restaurant A, rated 4.8, located nearby,” they may proceed to book a reservation. This decision is driven by cognitive biases and algorithmic features. Turkle’s (2011) anthropomorphism bias leads users to view AI as a friendly guide, mistaking the response for a friend’s advice. Skitka et al.’s (1999) automation bias fosters overtrust in the 4.8 rating as a data-driven fact. Personalization (“based on your preferences”) enhances trust, while black-box opacity prevents scrutiny of AI’s logic. These factors converge to integrate the six meanings.

Cognitive meaning arises as “A” denotes a real restaurant, though AI has not experienced it, with grammatical coherence masking this absence. Use-contextual meaning, per Wittgenstein’s (1988) language games, prompts the reservation, but AI lacks motivation. Intentional meaning emerges from the user’s projection that “AI considered my tastes,” yet, as Austin (1962) suggests, RLHF creates this illusion. Existential meaning ties the dinner to life’s impact (e.g., a special evening), but Heidegger (1962) notes AI’s lack of existence. Relational meaning surfaces in restaurant interactions (e.g., with a waiter), though Bakhtin (1981) highlights AI’s absence as a dialogic agent. Logical meaning assumes the choice is rational, yet the output rests on statistical patterns, an assumption reinforced by automation bias.

This integration leads users to misinterpret AI outputs as trustworthy, life-changing, relational, or rational, culminating in the reservation. If Restaurant A disappoints, users may blame AI (“AI got it wrong”), despite AI’s lack of responsibility (Axiom 1). Real-world parallels underscore the stakes. In medical diagnostics, AI’s “no abnormality” output may lead doctors to skip verification, with X posts in 2025 reporting a 45% rise in medical AI distrust (X analysis, 2025). WHO (2024) notes a 10% reduction in AI-assisted diagnostic errors, yet overtrust persists. On social media, biased posts amplified by algorithms exposed 57% of users to misinformation during the 2020 U.S. election (MIT, 2021), with 2025 X data reflecting ongoing discontent with AI-driven bias (X analysis, 2025). This integration of meaning compensates for AI’s meaninglessness, systematically reproducing the fallacy and driving judgment dependency (Lemma 2).

3.3 Chain Reactions and Societal Collapse

The fallacy of perception shifts judgment to AI, diluting responsibility. Users follow AI’s restaurant recommendations, bypassing self-judgment. Doctors overtrust AI diagnoses like “no abnormality,” neglecting independent assessments. Companies prioritize AI scores in hiring, sidelining human evaluations. Black-box opacity hinders verification, reinforcing automation bias, and errors like misdiagnoses or biases are attributed to “the system,” eroding trust (Coeckelbergh, 2020).

When AI outputs are embedded in institutions, hollow meaning becomes standardized (Lemma 3). In education, AI grading marginalizes teachers, with 60% of institutions adopting it by 2023 (UNESCO, 2024). In healthcare, standardized AI diagnoses reduce physician autonomy, as reported by WHO (2024). In governance, predictive policing perpetuates biased data, with 2025 X posts signaling distrust in AI-driven policing (X analysis, 2025). Human design choices, such as data selection and RLHF reward criteria, introduce ethical biases, obscuring accountability (Latour, 2005). This hollow meaning accelerates the erosion of trust and meaning, raising collapse risks at critical thresholds. Potential outcomes include judicial collapse, where AI-driven rulings undermine trust, or a zombie society, where meaning is hollowed out, leaving only mechanical function.

4. Proposition Proof

The proposition is verified through logical proof. AI outputs are statistical products lacking intent or responsibility (Definition 1). Meaning encompasses intent, context, and existential relations, with AI lacking existential, relational, and logical depth (Definition 2). The fallacy of perception involves projecting meaning onto the non-existent (Definition 3), and societal collapse denotes the breakdown of trust, responsibility, and shared meaning (Definition 4). Axiom 1 posits that AI lacks intent (Vaswani et al., 2017), while Axiom 2 asserts that humans generate and interpret meaning (Wittgenstein, 1988). Lemma 1 states that humans treat AI outputs as pseudo-subjects, Lemma 2 that judgment dependency dilutes responsibility, and Lemma 3 that institutionalized hollow meaning leads to collapse.

Auxiliary conditions—AI treated as a pseudo-subject (highly probable due to cognitive biases), judgment validity depending on AI (common in AI adoption), and institutions designed around AI (evident in 2025)—hold in real-world contexts. The proof begins with AI’s lack of meaning (Axiom 1), followed by human projection treating AI as a pseudo-subject (Axiom 2, Lemma 1), leading to judgment dependency and responsibility dilution (Lemma 2), and culminating in institutionalized hollow meaning causing collapse (Lemma 3). This logical chain establishes the proposition’s conditional validity.
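
The structure of this argument is a short chain of implications and can be rendered mechanically. The following Lean sketch is my formalization of the conditional form only; it takes the lemmas and a threshold condition as unexamined hypotheses, so it establishes nothing about their truth, which remains the empirical burden discussed in Section 5.

-- Propositional sketch of the Section 4 chain: granted the hypotheses,
-- collapse follows; the proof carries no weight beyond what it is handed.
variable (PseudoSubject Dependency Hollow Collapse : Prop)

theorem conditional_collapse
    (lemma1    : PseudoSubject)               -- humans treat AI outputs as pseudo-subjects
    (lemma2    : PseudoSubject → Dependency)  -- pseudo-subjectification yields judgment dependency
    (lemma3    : Dependency → Hollow)         -- dependency institutionalizes hollow meaning
    (threshold : Hollow → Collapse)           -- hollow meaning past a critical threshold yields collapse
    : Collapse :=
  threshold (lemma3 (lemma2 lemma1))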

5. Re-evaluation and Limitations

The proposition is logically coherent, but its inevitability remains uncertain due to empirical gaps. Data from 2025 provides context. The EU AI Act mandates transparency, reducing opacity risks (European Commission, 2025). X posts report a 45% rise in medical AI distrust (X analysis, April 18, 2025), while WHO (2024) notes a 10% reduction in AI-assisted diagnostic errors. AI literacy programs, teaching that “AI is a tool,” curb anthropomorphism, and explainable AI (XAI) facilitates verification. Complementary AI use, such as physician support that reduces diagnostic errors, suggests that collapse can be avoided.

Yet, judgment dependency undermines human oversight, and institutional AI expansion perpetuates biases. The 60% adoption of AI grading in education by 2023 (UNESCO, 2024) and 2025’s distrust in AI policing (X analysis, 2025) highlight persistent risks. Limitations include unclear collapse thresholds, uncertain intervention efficacy, and lack of comparison with other pseudo-subjects like advertising or religion. Future research should leverage 2025 datasets to quantify AI’s social impact and evaluate literacy and XAI effects.

6. Conclusion

AI’s subjectless language induces the fallacy of perception through the integration of six meanings, driving a chain of pseudo-subjectification, judgment dependency, and institutionalization that conditionally risks societal collapse. Examples like restaurant recommendations and medical diagnostics vividly illustrate how the fallacy permeates individual actions and institutional norms. The EU AI Act and literacy efforts in 2025 offer hope, but the 45% surge in medical AI distrust (X analysis, 2025) and lingering diagnostic risks (WHO, 2024) underscore the urgency of action. The paper advocates for AI literacy education, algorithmic transparency, and ethical governance, urging empirical validation and policy design. By bridging technology philosophy and societal futures, it calls for renewed dialogue on AI’s ethical integration.

References

Austin, J. L. (1962). How to Do Things with Words. Oxford University Press.
Bakhtin, M. M. (1981). The Dialogic Imagination. University of Texas Press.
Coeckelbergh, M. (2020). AI Ethics. MIT Press.
European Commission. (2025). Artificial Intelligence in Healthcare.
Heidegger, M. (1962). Being and Time. Harper & Row.
Latour, B. (2005). Reassembling the Social. Oxford University Press.
MIT. (2021). Social Media and Misinformation Report.
Skitka, L. J., Mosier, K. L., & Burdick, M. (1999). Does automation bias decision-making? International Journal of Human-Computer Studies, 51(5), 991–1006.
Statista. (2025). Global AI Market Forecast.
Turkle, S. (2011). Alone Together. Basic Books.
UNESCO. (2024). AI in Education Report.
Vaswani, A., Shazeer, N., Parmar, N., et al. (2017). Attention is all you need. NeurIPS.
WHO. (2024). Global Health Technology Report.
Wittgenstein, L. (1988). Philosophical Investigations. Blackwell Publishing.
X analysis. (2025). AI Distrust Trends, April 18.


Translation Notes and Features

1. Fidelity to Original Style: The translation preserves the fluid, sophisticated narrative of the Japanese original, avoiding bullet points and ensuring a seamless flow. Academic rigor is balanced with accessibility through clear explanations of terms like “pseudo-subjectification” and “black-box opacity.”

2. Data Integration: Citations such as Statista (2025), MIT (2021), UNESCO (2024), X analysis (2025), and WHO (2024) are seamlessly woven into the text, grounding the philosophical argument in contemporary evidence.

3. Clarity of Meaning Integration: The restaurant recommendation example is translated with vivid analogies (e.g., “friendly guide,” “data-driven fact”) to make the integration of the six meanings intuitive. Medical and social media cases are contextualized with 2025 data for relevance.

4. Cultural and Contextual Adaptation: References to Japan’s AI literacy programs and the EU AI Act are retained, ensuring global applicability while reflecting the original’s regional nuances.

5. Structural Coherence: The paper’s structure—abstract, introduction, methodology, discussion, proof, re-evaluation, conclusion—mirrors the original, with each section flowing logically into the next.

Additional Requests

The translation fulfills the request to render the synthesized paper in English. If you require further modifications, such as a comparative table of the original papers and this synthesis, specific section expansions, or visualizations (e.g., diagrams of meaning integration), please let me know. Additionally, if a cover letter or submission formatting for a journal like AI & Society is needed, I can provide those as well.
