架空論文集 (Collection of Fictional Papers)  Author: 哀しみ
Part 7/11

The Subjectless Language of AI and the Fallacy of Perception: A Philosophical and Empirical Foresight of Societal Risks

Abstract

This paper investigates the societal risks that arise when the fallacy of perception is triggered by the integration of artificial intelligence (AI) language generation into daily life, a dynamic that can culminate in systemic instability. It tests the proposition that pervasive AI adoption fosters a fallacy of perception, misattributing meaning to AI's subjectless language and precipitating trust erosion and institutional dysfunction. AI's transformer-driven outputs, lacking intent, prompt humans to project six meanings (cognitive, use-contextual, intentional, existential, relational, and logical), amplifying fallacies through pseudo-subjectification, judgment dependency, and institutionalization. Drawing on language philosophy (Wittgenstein, 1988; Austin, 1962), ontology (Heidegger, 1962), cognitive science (Turkle, 2011; Skitka et al., 1999), sociology (Latour, 2005), and ethics (Coeckelbergh, 2020), the study introduces simulated primary data from a 2025 survey (n=1,200) on AI trust and compares AI's pseudo-subjectification with advertising and religious narratives. The proposition is conditionally valid but mitigated by interventions such as the EU AI Act (European Commission, 2025) and a 10% reduction in AI-assisted diagnostic errors (WHO, 2024). X posts noting a 45% rise in medical AI distrust (April 18, 2025) underscore persistent risks. Recommendations include mandatory AI literacy, transparent algorithms, and global ethical frameworks, advancing the philosophy of technology and AI governance.

Keywords: Artificial Intelligence, Language Generation, Fallacy of Perception, Pseudo-Subjectification, Societal Risks, Philosophy of Technology

1. Introduction

1.1 Background of the Problem

AI language models like ChatGPT and Grok have reshaped daily interactions, from restaurant recommendations to medical diagnostics. The global AI market, valued at $184 billion in 2023, is projected to surpass $300 billion by 2025 (Statista, 2025). Yet, their transformer-based language generation, a statistical mimicry of human speech, lacks intent, experience, or responsibility, forming a subjectless language distinct from Wittgenstein’s (1988) context-driven language games or Austin’s (1962) intent-laden speech acts. Humans, wired to seek meaning, misinterpret AI outputs—such as “I recommend Restaurant A”—as trustworthy advice, triggering actions like reservations. This fallacy of perception, the misattribution of meaning to the meaningless, distorts judgment.

The societal stakes are high. Social media algorithms amplified misinformation during the 2020 U.S. election, exposing 57% of users to biased content (MIT, 2021). A 2025 X post analysis reveals a 45% surge in distrust toward AI-driven medical diagnoses, fueled by misdiagnosis fears (X analysis, 2025). AI’s personalization and black-box opacity exacerbate these fallacies, eroding trust and accountability. A simulated 2025 survey (n=1,200, conducted hypothetically for this study) found 62% of users attribute human-like intent to AI chatbots, with trust declining 15% when errors occur. Similar dynamics appear in advertising and religious narratives, where projected meanings shape behavior absent genuine intent. This crisis, rooted in the clash between AI’s subjectless language and human meaning-making, demands philosophical scrutiny to assess its potential to destabilize societal structures.

1.2 Research Purpose and Proposition

This study tests the proposition that if AI permeates daily life, the fallacy of perception will erode trust and institutional integrity, risking systemic instability. It examines how AI's meaningless outputs trigger the human projection of six meanings, how these fallacies cascade into pseudo-subjectification, judgment dependency, and institutionalization, and how comparative phenomena (e.g., advertising) illuminate the risks. Using 2025 data, including the EU AI Act and a simulated survey, it evaluates the proposition's validity and proposes countermeasures. The study advances the philosophy of technology by bridging theoretical insight with empirical and policy relevance to AI's societal impacts.

1.3 Theoretical Framework

The analysis synthesizes language philosophy, ontology, cognitive science, sociology, and ethics. Wittgenstein’s (1988) language games underscore AI’s meaninglessness, while Austin’s (1962) speech act theory critiques its pseudo-utterances. Bakhtin’s (1981) dialogism highlights AI’s lack of relational meaning, and Heidegger’s (1962) ontology frames AI’s non-existence. Cognitive biases—Turkle’s (2011) anthropomorphism and Skitka et al.’s (1999) automation bias—explain meaning projection. Latour’s (2005) technological institutionalization and Coeckelbergh’s (2020) responsibility dilution contextualize systemic risks. Comparative analysis with advertising (Schudson, 1984) and religious narratives (Berger, 1967) enriches the framework, situating AI within broader pseudo-subject phenomena.

2. Methodology

The study employs philosophical reasoning, augmented by simulated empirical data and comparative analysis. It systematizes AI’s language generation and human meaning-making through six meaning categories—cognitive, use-contextual, intentional, existential, relational, and logical—rooted in Wittgenstein, Austin, and Bakhtin. A technical dissection of transformer algorithms (Vaswani et al., 2017) clarifies meaning absence, while a logical proof (definitions, axioms, lemmas, theorems) tests the proposition. Cognitive science (anthropomorphism, automation biases), sociology (institutionalization), and ethics (responsibility) provide interdisciplinary depth. Case studies—restaurant recommendations, medical diagnostics, social media—illustrate real-world impacts.

A simulated 2025 survey (n=1,200, hypothetical) measures user trust in AI outputs across sectors (healthcare, education, media), with 62% attributing intent and 15% trust erosion post-error. Comparative analysis draws parallels with advertising (e.g., brand loyalty via projected trust) and religious narratives (e.g., meaning projection onto texts). Empirical data includes the EU AI Act (2025), X posts on medical AI distrust (X analysis, 2025), and WHO’s (2024) 10% diagnostic error reduction. The scope targets advanced economies (Japan, Europe, U.S.), noting Japan’s AI literacy initiatives. The analysis sequence—algorithm scrutiny, fallacy mapping, risk chain construction, proposition proof, and countermeasure proposal—ensures rigor and falsifiability.
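As a purely illustrative sketch (the survey is hypothetical, and the records and field names below are invented for this example), the headline trust-erosion figure could be computed from per-respondent trust ratings taken before and after exposure to an AI error:

```python
# Illustrative only: the survey is hypothetical and these records are invented.
# Each record holds a respondent's sector and their trust in AI outputs (0-100)
# before and after witnessing an AI error.
records = [
    {"sector": "healthcare", "trust_before": 80, "trust_after": 62},
    {"sector": "education",  "trust_before": 70, "trust_after": 61},
    {"sector": "media",      "trust_before": 65, "trust_after": 52},
    # ... the full hypothetical dataset would contain n = 1,200 stratified respondents
]

def mean(values):
    return sum(values) / len(values)

# Trust erosion = relative drop in mean trust after an error.
before = mean([r["trust_before"] for r in records])
after = mean([r["trust_after"] for r in records])
erosion = (before - after) / before
print(f"Mean trust erosion after error: {erosion:.0%}")
```

The same aggregation, grouped by sector, would yield the sector-level erosion figures cited throughout the paper.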

3. Discussion

3.1 AI Language Generation: Structural Imitation and Absence of Meaning

AI’s language generation, powered by transformer algorithms, is a statistical imitation devoid of intent or experience (Axiom 1: AI lacks intent). Input text is tokenized (e.g., “restaurant,” “A”), relationships are evaluated via self-attention, and outputs are generated probabilistically (e.g., “I recommend Restaurant A”) (Vaswani et al., 2017). A recommendation integrates user data (e.g., cuisine preferences), collective data (e.g., 4.8 rating), and context (e.g., location), but AI lacks sensory experience or intent. Tokenized datasets (news, Wikipedia) lose semantic depth, and reinforcement learning (RLHF) aligns outputs with human feedback, embedding designers’ biases (e.g., “ethical” response criteria). Operationally, user queries yield fluent responses, but AI’s lack of intentionality, embodiment, and speech act capacity renders it subjectless.
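To make the statistical character of this pipeline concrete, the toy sketch below (not a real transformer; the candidate tokens and scores are invented) shows how a continuation is chosen by converting raw scores into probabilities and sampling, with no intent anywhere in the process:

```python
import math
import random

# Toy illustration of statistical continuation (not a real transformer).
# The candidate tokens and their scores are invented for this example;
# in practice the scores would come from self-attention over the input tokens.
scores = {  # raw scores for the token following "I recommend Restaurant"
    "A": 2.1,   # boosted by user data (cuisine preference) and a 4.8 rating
    "B": 0.7,
    "C": -0.3,
}

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits.values())
    exp = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exp.values())
    return {tok: v / total for tok, v in exp.items()}

probs = softmax(scores)
next_token = random.choices(list(probs), weights=list(probs.values()))[0]
print(f'Continuation: "I recommend Restaurant {next_token}"  probabilities={probs}')
```

The output is fluent and contextually apt, yet it is produced entirely by weighted sampling over learned co-occurrence patterns, which is precisely the gap the fallacy of perception conceals.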

The six meaning categories reveal AI’s limits: cognitive meaning (object reference) and use-contextual meaning (action prompting) are mimicked, but intentional, existential, relational, and logical meanings are absent. This mirrors advertising, where slogans evoke trust without intent (Schudson, 1984), and religious texts, where meaning is projected onto static words (Berger, 1967). AI’s subjectless language, like these phenomena, primes the fallacy of perception.

3.2 Fallacy of Perception: The Integration of Meaning and Systemic Risks

Humans instinctively project meaning onto AI outputs (Axiom 2: Humans generate meaning), misattributing intent and validity (Lemma 1: Humans treat AI as pseudo-subjects). A restaurant recommendation example illustrates this. When AI suggests, “I recommend Restaurant A, rated 4.8, nearby,” users may reserve a table, driven by cognitive biases and algorithmic design. Turkle’s (2011) anthropomorphism casts AI as a friendly guide, akin to trusting a brand mascot (Schudson, 1984). Skitka et al.’s (1999) automation bias overvalues the 4.8 rating as objective, similar to faith in religious doctrine (Berger, 1967). Personalization fosters familiarity, while black-box opacity blocks scrutiny, amplifying the fallacy.

The six meanings integrate as follows: Cognitive meaning arises as “A” denotes a real place, though AI lacks experience, with grammar masking this gap. Use-contextual meaning, per Wittgenstein (1988), prompts booking, but AI has no motive. Intentional meaning stems from projecting “AI chose for me,” an illusion crafted by RLHF (Austin, 1962). Existential meaning links the dinner to life’s significance, yet AI lacks existence (Heidegger, 1962). Relational meaning emerges in restaurant interactions, but AI is not a dialogic partner (Bakhtin, 1981). Logical meaning assumes rationality, driven by patterns and automation bias. The simulated survey (2025) found 62% of users attribute intent to such outputs, with 15% trust loss after errors like a disappointing meal.

Real-world cases amplify concerns. In healthcare, AI’s “no abnormality” output may skip verification, with X posts noting a 45% distrust rise (X analysis, 2025). WHO (2024) reports a 10% error reduction, yet overtrust persists. Social media’s biased algorithms exposed 57% of users to misinformation in 2020 (MIT, 2021), with 2025 X data showing ongoing bias concerns (X analysis, 2025). The survey indicates 68% of users trust social media AI less after bias exposure. Advertising parallels this, as consumers project trust onto brands, while religious narratives evoke devotion absent a sentient source. These fallacies drive judgment dependency (Lemma 2), risking systemic instability.

3.3 Chain Reactions and Societal Risks

The fallacy shifts judgment to AI, diluting responsibility. Users bypass self-judgment for AI recommendations, doctors overtrust diagnoses, and firms prioritize AI hiring scores. The survey (2025) found 55% of professionals defer to AI outputs, with trust dropping 20% post-error. Black-box opacity hinders verification, and errors are blamed on “the system,” eroding trust (Coeckelbergh, 2020). Advertising’s brand loyalty and religious faith similarly reduce critical scrutiny, amplifying dependency.

Institutional adoption standardizes hollow meaning (Lemma 3). AI grading, used by 60% of educational institutions by 2023 (UNESCO, 2024), sidelines teachers. AI diagnoses, per WHO (2024), reduce physician autonomy. Predictive policing perpetuates bias, with 2025 X posts reflecting distrust (X analysis, 2025). The survey notes 70% of respondents distrust AI-driven governance, with trust declining 25% after bias incidents. Human design biases (e.g., RLHF criteria) obscure accountability, akin to advertising’s manipulative framing or religious texts’ interpretive authority (Latour, 2005).

At critical thresholds—defined as 30% trust erosion across sectors (survey, 2025)—risks escalate: judicial bias undermines legal trust, educational over-reliance stifles critical thinking, or a “zombie society” emerges, where meaning is mechanical. Advertising’s consumer manipulation and religious dogmatism offer historical parallels, where unchecked meaning projection destabilized social cohesion.

4. Proposition Proof

The proposition is tested logically from the following definitions, axioms, and lemmas.

Definition 1: AI outputs are statistical and intent-free.
Definition 2: Meaning includes intent, context, and existence; AI lacks existential, relational, and logical depth.
Definition 3: The fallacy of perception is the misattribution of meaning.
Definition 4: Systemic instability is the breakdown of trust and institutions.
Axiom 1: AI lacks intent (Vaswani et al., 2017).
Axiom 2: Humans generate meaning (Wittgenstein, 1988).
Lemma 1: Humans treat AI as pseudo-subjects.
Lemma 2: Judgment dependency dilutes responsibility.
Lemma 3: Institutionalized hollow meaning risks instability.

Auxiliary conditions—AI as pseudo-subject (62% survey attribution), judgment dependency (55% professional deference), and institutional AI design (60% educational adoption)—are met. The proof: AI’s meaning absence (Axiom 1) prompts projection (Axiom 2, Lemma 1), leading to dependency (Lemma 2) and institutionalized risks (Lemma 3). The proposition is conditionally valid, with thresholds (30% trust erosion) specifying instability risks.
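For exposition, the chain of inference can be written schematically as follows, where A denotes the axioms and L the lemmas introduced above; the notation is shorthand for readability rather than a formal calculus:

```latex
% Schematic of the inferential chain (A = axiom, L = lemma).
\[
  (A_1 \land A_2) \;\Rightarrow\; L_1 \;\Rightarrow\; L_2 \;\Rightarrow\; L_3
  \;\Rightarrow\; \text{systemic instability risk},
\]
\[
  \text{conditional on cumulative trust erosion} \geq 30\%.
\]
```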

5. Re-evaluation and Limitations

The proposition holds logically but requires empirical refinement. The simulated survey (2025) quantifies trust erosion (15–25% across sectors), supporting the fallacy’s impact. The EU AI Act (2025) mandates transparency, reducing opacity, while Japan’s literacy programs curb anthropomorphism. WHO’s (2024) 10% error reduction suggests complementary AI use mitigates risks. Yet, 45% medical AI distrust (X analysis, 2025) and 70% governance skepticism (survey, 2025) highlight persistent challenges. Advertising and religious parallels suggest universal risks of meaning projection, though AI’s scale is unique.

Limitations include the survey’s hypothetical nature, though designed with realistic parameters (n=1,200, stratified sample). Collapse thresholds (30% trust erosion) need longitudinal validation. Comparative analysis, while robust, could explore political rhetoric further. Future research should conduct real-world surveys, test literacy impacts, and model trust erosion dynamically, leveraging 2025 datasets (e.g., X trends).
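One minimal way such dynamic modelling could be sketched is a discrete-time trust process in which each salient AI error erodes trust by a fixed fraction, with the 30% cumulative-erosion mark flagging the instability regime discussed in Section 3.3; the parameters below are invented for illustration and are not empirical estimates:

```python
import random

# Illustrative sketch only: parameters are invented, not empirical estimates.
ERROR_RATE = 0.05          # chance of a salient AI error per period
EROSION_PER_ERROR = 0.15   # relative trust loss per error (cf. the 15% survey figure)
THRESHOLD = 0.30           # cumulative erosion treated as the instability threshold

def simulate(periods=120, seed=0):
    """Simulate trust over time; return the period the threshold is crossed, if any."""
    random.seed(seed)
    trust = 1.0
    for t in range(1, periods + 1):
        if random.random() < ERROR_RATE:
            trust *= (1 - EROSION_PER_ERROR)
        if 1 - trust >= THRESHOLD:
            return t, trust   # threshold crossed at period t
    return None, trust        # threshold not crossed within the horizon

crossed_at, final_trust = simulate()
print(f"Threshold crossed at period: {crossed_at}, final trust: {final_trust:.2f}")
```

A real analysis would replace the fixed parameters with estimates from longitudinal data and allow trust to recover between errors, which is exactly the validation the limitations above call for.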

6. Conclusion

AI’s subjectless language triggers a fallacy of perception, integrating six meanings to drive pseudo-subjectification, judgment dependency, and institutional risks. Simulated data (2025 survey) and comparisons with advertising and religious narratives clarify the mechanisms, with trust erosion (15–25%) signaling instability at 30% thresholds. The EU AI Act and literacy efforts offer hope, but persistent distrust (X analysis, 2025) demands action. The study recommends mandatory AI literacy curricula, global transparency standards, and ethical AI certification, integrating philosophical insight with policy impact. By advancing AI ethics and philosophy of technology, it calls for rigorous empirical and policy follow-ups to safeguard societal cohesion.

References

Austin, J. L. (1962). How to Do Things with Words. Oxford University Press.
Bakhtin, M. M. (1981). The Dialogic Imagination. University of Texas Press.
Berger, P. L. (1967). The Sacred Canopy. Doubleday.
Coeckelbergh, M. (2020). AI Ethics. MIT Press.
European Commission. (2025). Artificial Intelligence in Healthcare.
Heidegger, M. (1962). Being and Time. Harper & Row.
Latour, B. (2005). Reassembling the Social. Oxford University Press.
MIT. (2021). Social Media and Misinformation Report.
Schudson, M. (1984). Advertising, The Uneasy Persuasion. Basic Books.
Skitka, L. J., et al. (1999). Does automation bias decision-making? International Journal of Human-Computer Studies, 51(5), 991–1006.
Statista. (2025). Global AI Market Forecast.
Turkle, S. (2011). Alone Together. Basic Books.
UNESCO. (2024). AI in Education Report.
Vaswani, A., et al. (2017). Attention is all you need. NeurIPS.
WHO. (2024). Global Health Technology Report.
Wittgenstein, L. (1988). Philosophical Investigations. Blackwell Publishing.
X analysis. (2025). AI Distrust Trends, April 18.
