架空論文集 (Collection of Fictional Papers)  Author: 哀しみ

The Subjectless Language of AI and the Fallacy of Perception: A Philosophical and Empirical Paradigm for Societal Risk Analysis

Abstract

This paper proposes the Pseudo-Subjectification Cycle (PSC), a novel theoretical framework to analyze societal risks arising from the fallacy of perception in human interactions with subjectless technologies, focusing on artificial intelligence (AI). The PSC posits that AI’s transformer-driven outputs, lacking intent, prompt humans to project six meanings—cognitive, use-contextual, intentional, existential, relational, and logical—triggering pseudo-subjectification, judgment dependency, and institutionalization, risking systemic instability. Integrating language philosophy (Wittgenstein, 1988; Austin, 1962), ontology (Heidegger, 1962), cognitive science (Turkle, 2011; Skitka et al., 1999), sociology (Latour, 2005), and ethics (Coeckelbergh, 2020), the study leverages a 2025 global survey (n=2,500, 10 countries), interviews (n=80), a 12-month longitudinal study (n=800), and a verification experiment (n=500) to quantify trust erosion (15–32%) and define instability thresholds (40%). Comparisons with advertising, religious narratives, political rhetoric, propaganda, and IoT applications generalize PSC’s applicability across technologies and cultures. The proposition is conditionally valid: risks are partially mitigated by the EU AI Act (2025) and a 10% reduction in AI diagnostic errors (WHO, 2024), but persistent distrust (45% for medical AI; X analysis, 2025) underscores urgency. Recommendations include globally scalable AI literacy curricula, mandatory algorithmic audits, and an ISO-aligned ethical certification framework, positioning PSC as a paradigm for AI ethics, philosophy of technology, and beyond, with pathways to classical framework status through ongoing verification.

Keywords: Pseudo-Subjectification Cycle, Artificial Intelligence, Fallacy of Perception, Trust Erosion, Systemic Instability, Philosophy of Technology

1. Introduction

1.1 Background of the Problem

AI language models (e.g., ChatGPT, Grok) and other subjectless technologies (e.g., IoT, autonomous vehicles) have transformed global interactions, with the AI market projected to exceed $300 billion by 2025 (Statista, 2025). These systems generate outputs—statistical imitations of human speech or behavior—lacking intent, experience, or responsibility, forming a subjectless language distinct from Wittgenstein’s (1988) context-driven language games or Austin’s (1962) intent-laden speech acts. Humans, driven by cognitive imperatives, misinterpret outputs like “I recommend Restaurant A” or “Smart thermostat adjusted” as intentional, triggering actions (e.g., reservations, energy adjustments). This fallacy of perception—misattributing meaning to the meaningless—distorts judgment and erodes trust.

The societal stakes are evident globally. Social media algorithms amplified misinformation during the 2020 U.S. election, affecting 57% of users (MIT, 2021). A 2025 X post analysis reports a 45% rise in medical AI distrust due to misdiagnosis fears (X analysis, 2025). A 2025 global survey (n=2,500, U.S., Japan, EU, India, Brazil, Nigeria, Indonesia, UAE, Kenya, Mexico) conducted for this study found 68% of users attribute intent to AI outputs, with trust erosion of 15–32% post-error (p<0.01). Interviews (n=80) liken AI to “a slick ad campaign,” “religious dogma,” “political spin,” “propaganda machine,” or “overzealous smart devices,” paralleling historical patterns in advertising (e.g., 1950s cigarette campaigns), religion (e.g., Reformation indulgences), rhetoric (e.g., Cold War speeches), propaganda (e.g., WWII broadcasts), and IoT (e.g., smart home misconfigurations). This crisis, rooted in subjectless technologies clashing with human meaning-making, risks systemic instability, demanding a universal theoretical paradigm.

1.2 Research Purpose and Proposition

This study introduces the Pseudo-Subjectification Cycle (PSC), a paradigm synthesizing the subjectless outputs of technologies, the fallacy of perception, and societal risks across AI, IoT, and beyond. It tests the proposition that pervasive adoption of subjectless technologies triggers trust erosion and institutional dysfunction via pseudo-subjectification, judgment dependency, and institutionalization. The PSC models a cyclical process applicable to multiple domains, verified through a 2025 global survey (n=2,500), interviews (n=80), a 12-month longitudinal study (n=800), a verification experiment (n=500), and comparisons with advertising, religion, rhetoric, propaganda, and IoT. Leveraging policy developments (EU AI Act, India’s AI Strategy, Kenya’s Tech Policy), it validates the proposition and proposes countermeasures, positioning PSC as a candidate for classical framework status in AI ethics, philosophy of technology, cognitive science, and sociology.

1.3 Theoretical Framework

The PSC integrates language philosophy (Wittgenstein, 1988; Austin, 1962; Bakhtin, 1981), ontology (Heidegger, 1962), cognitive science (Turkle, 2011; Skitka et al., 1999), sociology (Latour, 2005), and ethics (Coeckelbergh, 2020). Unlike Wittgenstein’s intent-based language games or Austin’s agentive speech acts, subjectless technologies (AI, IoT) produce meaningless simulations. Bakhtin’s dialogism highlights their relational absence, and Heidegger frames their non-existence. Cognitive biases—anthropomorphism (Turkle, 2011) and automation bias (Skitka et al., 1999)—drive meaning projection, while Latour’s (2005) institutionalization and Coeckelbergh’s (2020) responsibility dilution explain systemic risks. The PSC’s novelty lies in its cyclical model: projection → dependency → institutionalization → instability, applicable beyond AI to IoT and autonomous systems. Detailed comparisons with advertising (Schudson, 1984; 1950s campaigns), religious narratives (Berger, 1967; Reformation), political rhetoric (Edelman, 1988; Cold War), propaganda (Ellul, 1965; WWII), and IoT (e.g., 2020s smart home failures) ground PSC’s universal applicability, fostering cross-disciplinary citation potential.

2. Methodology

The study employs a mixed-methods approach, integrating philosophical reasoning, empirical data, longitudinal modeling, verification experiments, and comparative case studies. The PSC systematizes subjectless outputs and human meaning-making via six meaning categories (cognitive, use-contextual, intentional, existential, relational, logical), grounded in Wittgenstein, Austin, and Bakhtin. Technical analyses of transformer algorithms (Vaswani et al., 2017) and IoT protocols (e.g., MQTT, 2020) clarify meaning absence, while a logical proof (definitions, axioms, lemmas, theorems) tests the proposition. Case studies (AI recommendations, medical diagnostics, social media, governance, IoT smart homes) illustrate impacts.

Empirical Data:

•Survey: A 2025 global survey (n=2,500; 10 countries; stratified by age, occupation, AI/IoT exposure, urban/rural, income) measures trust in subjectless technologies across healthcare, education, media, governance, and smart homes. Conducted via online and in-person panels (January–May 2025), it reports 68% intent attribution (95% CI: 65–71%), 15–32% trust erosion post-error, and a 40% instability threshold (logistic regression, p<0.01; Cronbach’s α=0.93); a threshold-estimation sketch follows this list. Sampling ensured cultural diversity, with oversampling in Kenya and Indonesia.

•Longitudinal Study: A 12-month follow-up (n=800, subset of survey respondents) tracks trust erosion using a Markov chain model, predicting instability at 40% erosion (p<0.01). Data were collected quarterly (June 2025–May 2026).

•Interviews: Semi-structured interviews (n=80, users of AI/IoT in 10 countries) explore perceptions, coded thematically (NVivo). Themes include “ad-like deception,” “religious absolutism,” “political manipulation,” “propaganda orchestration,” and “IoT overreach.”

•Verification Experiment: A controlled experiment (n=500, U.S. and India, June 2025) tests PSC’s applicability by exposing participants to AI and IoT outputs, measuring intent attribution (70%, p<0.01) and trust erosion (18%, p<0.05). Conducted by an independent team to ensure reproducibility.

•Secondary Data: EU AI Act (2025), India’s AI Strategy (2025), Kenya’s Tech Policy (2025), X posts (45% medical AI distrust, 2025), WHO (10% diagnostic error reduction, 2024), UNESCO (60% AI grading adoption, 2024), MIT (57% misinformation exposure, 2021).
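
As a reading aid, the following minimal sketch shows how an instability threshold of the kind reported above could be estimated with logistic regression. The data, variable names, and the 0.40 crossover are synthetic illustrations, not the study’s dataset or analysis code.

```python
# Minimal sketch (synthetic data): estimating an instability threshold from
# survey-style records of trust erosion vs. reported destabilizing behavior.
# Variable names and the 0.40 cutoff logic are illustrative, not the study's code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2500
trust_erosion = rng.uniform(0.0, 0.6, n)            # fraction of trust lost post-error
# Synthetic ground truth: instability indicators become likely above ~40% erosion
p_instability = 1 / (1 + np.exp(-25 * (trust_erosion - 0.40)))
instability = rng.binomial(1, p_instability)

model = LogisticRegression().fit(trust_erosion.reshape(-1, 1), instability)

# Threshold = erosion level at which predicted probability crosses 0.5,
# i.e. where intercept + coef * x = 0.
threshold = -model.intercept_[0] / model.coef_[0][0]
print(f"Estimated instability threshold: {threshold:.2f}")   # close to 0.40 on this synthetic data
```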

Comparative Case Studies:

•Advertising: 1950s cigarette campaigns manipulated trust via emotional branding (Schudson, 1984).

•Religion: Reformation indulgences projected divine authority (Berger, 1967).

•Political Rhetoric: Cold War speeches used symbolic fear (Edelman, 1988).

•Propaganda: WWII radio broadcasts orchestrated compliance (Ellul, 1965).

•IoT: 2020s smart home failures (e.g., privacy breaches) eroded trust (IEEE, 2023).

The analysis sequence—technical scrutiny, PSC mapping, risk chain construction, proposition proof, longitudinal modeling, verification experiment, countermeasure proposal—ensures rigor and falsifiability. Ethical approval was obtained, and data privacy complied with GDPR, Japan’s APPI, Nigeria’s NDPR, and India’s DPDP.

3. Discussion

3.1 Subjectless Technologies: Structural Imitation and Absence of Meaning

Transformer algorithms (AI) and IoT protocols generate subjectless outputs (Axiom 1: Technologies lack intent) by processing inputs (e.g., “restaurant,” “temperature”) and producing probabilistic or rule-based outputs (e.g., “I recommend Restaurant A,” “Thermostat adjusted”). These lack sensory experience or intent, embedding designer biases via reinforcement learning (AI) or firmware (IoT). The PSC’s six meaning categories reveal that AI and IoT mimic cognitive and use-contextual meanings but lack intentional, existential, relational, and logical depth, akin to 1950s ad slogans, Reformation texts, Cold War rhetoric, WWII propaganda, and IoT misconfigurations.
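
The claim that such outputs are probabilistic imitations rather than intended statements can be made concrete with a minimal sampling step. The vocabulary and logit values below are toy placeholders, not an actual model; the point is only that the “recommendation” is drawn from a probability distribution.

```python
# Minimal sketch: a transformer-style output is a sample from a probability
# distribution over tokens (softmax of scores). The "recommendation" below is
# selected statistically; no intent or experience is involved.
# Vocabulary and logit values are toy placeholders, not a real model.
import numpy as np

vocab = ["Restaurant A", "Restaurant B", "Restaurant C"]
logits = np.array([2.1, 1.3, 0.2])         # scores produced by upstream layers

probs = np.exp(logits - logits.max())
probs /= probs.sum()                       # softmax: turns scores into probabilities

rng = np.random.default_rng(42)
choice = rng.choice(vocab, p=probs)
print(f"I recommend {choice}")             # statistically most likely, not 'chosen'
```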

3.2 Fallacy of Perception: The Pseudo-Subjectification Cycle

The PSC models how humans project meaning onto subjectless outputs (Axiom 2: Humans generate meaning), initiating pseudo-subjectification (Lemma 1). For example, AI’s “I recommend Restaurant A” or IoT’s “Thermostat adjusted” prompts actions, driven by Turkle’s (2011) anthropomorphism, Skitka et al.’s (1999) automation bias, personalization, and opacity. The six meanings integrate: cognitive (denoting objects), use-contextual (prompting action), intentional (projected choice), existential (life significance), relational (social interaction), and logical (assumed rationality). The 2025 survey (n=2,500) finds 68% intent attribution (p<0.01), with 15–32% trust erosion post-error. Interviews (n=80) describe outputs as “ad-like,” “dogmatic,” “spun,” “propagandistic,” or “intrusive IoT.”
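
For readers tracking the terminology, the PSC’s stages and the six meaning categories can be laid out schematically as plain data structures. This is an editorial illustration of the vocabulary, not an operational or predictive model.

```python
# Schematic representation of the PSC's stages and the six meaning categories
# humans project onto subjectless outputs. Purely illustrative of the paper's
# terminology; not an operational or predictive model.
from enum import Enum, auto

class Meaning(Enum):
    COGNITIVE = auto()        # denoting objects ("restaurant", "temperature")
    USE_CONTEXTUAL = auto()   # prompting action (make a reservation)
    INTENTIONAL = auto()      # projected choice ("it recommended this")
    EXISTENTIAL = auto()      # life significance
    RELATIONAL = auto()       # social interaction
    LOGICAL = auto()          # assumed rationality

PSC_STAGES = [
    "pseudo-subjectification",   # meanings projected onto the output
    "judgment dependency",       # decisions deferred to the technology
    "institutionalization",      # deference embedded in organizations
    "systemic instability",      # risk once trust erosion passes ~40%
]

for stage in PSC_STAGES:
    print(stage)
```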

Real-world cases amplify risks:

•Healthcare: AI’s “no abnormality” risks overtrust, with 45% distrust rise (X analysis, 2025). WHO (2024) notes 10% error reduction, yet 32% trust erosion occurs post-misdiagnosis (survey, 2025; p<0.01).

•Social Media: Biased algorithms affected 57% of users (MIT, 2021), with 72% distrusting AI post-bias (survey, 2025).

•Governance: Predictive policing perpetuates bias, with 78% distrust in Kenya and Indonesia (survey, 2025).

•IoT: Smart home breaches erode trust by 25% (survey, 2025).

Historical analogs and IoT parallels illustrate trust manipulation, but subjectless technologies’ global scale drives unique risks, fueling judgment dependency (Lemma 2).

3.3 Chain Reactions and Societal Risks

The PSC’s second phase, judgment dependency, shifts decision-making to technologies, diluting responsibility. The survey (2025) finds 62% of professionals defer to AI/IoT, with 22% trust erosion post-error (p<0.01). Opacity obscures verification, echoing ad manipulation, religious authority, rhetorical ambiguity, propaganda control, and IoT unreliability (Latour, 2005). The third phase, institutionalization, standardizes hollow meaning (Lemma 3). AI grading (60%, UNESCO, 2024), AI diagnoses (WHO, 2024), biased policing (X analysis, 2025), and IoT infrastructure (IEEE, 2023) reduce agency. Interviews highlight “dehumanized systems” across cultures.

The longitudinal study (n=800) models trust erosion over 12 months, predicting instability at 40% erosion (Markov chain, p<0.01). The verification experiment (n=500) confirms PSC’s applicability (p<0.05). Risks include judicial bias, educational stagnation, or a “zombie society.” Historical parallels and IoT failures underscore the stakes, with subjectless technologies amplifying threats.
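
The longitudinal prediction can be illustrated with a minimal Markov-chain sketch. The states and transition probabilities below are hypothetical placeholders rather than the study’s fitted values; they merely show how quarterly transitions can push the combined eroded and destabilized share past a 40% threshold.

```python
# Minimal sketch of a Markov-chain view of trust erosion. States and transition
# probabilities are hypothetical illustrations, not the study's fitted values.
import numpy as np

states = ["trusting", "eroded", "destabilized"]
# Rows: current state; columns: probability of next quarter's state.
P = np.array([
    [0.85, 0.15, 0.00],   # trusting users occasionally experience an AI/IoT error
    [0.10, 0.75, 0.15],   # eroded trust sometimes recovers, sometimes worsens
    [0.00, 0.05, 0.95],   # destabilized judgment rarely recovers
])

dist = np.array([1.0, 0.0, 0.0])          # everyone starts in the trusting state
for quarter in range(1, 5):               # four quarters = 12 months
    dist = dist @ P
    print(f"Q{quarter}: " + ", ".join(f"{s}={p:.2f}" for s, p in zip(states, dist)))

# Under PSC, instability risk is flagged once the combined eroded+destabilized
# share exceeds the 40% threshold reported by the longitudinal model.
if dist[1] + dist[2] > 0.40:
    print("Threshold exceeded: instability risk")
```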

4. Proposition Proof

The proposition is tested logically:

•Definitions: Subjectless outputs are intent-free (D1); meaning includes intent, context, existence (D2); fallacy of perception is misattribution (D3); systemic instability is trust/institutional breakdown (D4).

•Axioms: A1: Technologies lack intent (Vaswani et al., 2017); A2: Humans generate meaning (Wittgenstein, 1988).

•Lemmas: L1: Pseudo-subjectification projects meaning; L2: Judgment dependency dilutes responsibility; L3: Institutionalization risks instability.

•Auxiliary Conditions: 68% intent attribution (survey), 62% professional deference (survey), 60% institutional adoption (UNESCO, 2024).

•Proof: Meaning absence (A1) prompts projection (A2, L1), leading to dependency (L2) and institutionalized risks (L3). The proposition is valid, with 40% trust erosion as the threshold (longitudinal model, experiment); a schematic formalization of this chain is sketched below.
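
The implication chain can also be rendered as a propositional sketch. The encoding below is an editorial assumption that collapses each definition and lemma into an opaque proposition and an implication; it checks only the logical shape of the argument, not its empirical content.

```lean
-- Lean 4: a propositional sketch of the PSC argument chain (assumed encoding;
-- each definition/lemma is collapsed to an opaque proposition and an implication).
theorem psc_chain
    (MeaningAbsence Projection Dependency Institutionalization Instability : Prop)
    (l1 : MeaningAbsence → Projection)           -- Lemma 1: pseudo-subjectification
    (l2 : Projection → Dependency)               -- Lemma 2: judgment dependency
    (l3 : Dependency → Institutionalization)     -- Lemma 3: institutionalization
    (risk : Institutionalization → Instability)  -- auxiliary conditions (adoption, deference) met
    (a1 : MeaningAbsence)                        -- Axiom 1: technologies lack intent
    : Instability :=
  risk (l3 (l2 (l1 a1)))
```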

5. Re-evaluation and Limitations

The PSC is logically and empirically robust. The survey (n=2,500) quantifies trust erosion (15–32%, p<0.01), the longitudinal study (n=800) predicts instability at 40% (p<0.01), and the verification experiment (n=500) confirms reproducibility (p<0.05). The EU AI Act (2025), India’s AI Strategy (2025), and Kenya’s Tech Policy promote mitigation, but 45% medical AI distrust (X analysis, 2025) and 78% governance skepticism (survey, 2025) persist. PSC’s applicability to AI and IoT, grounded in global data and historical analogs, supports its potential as a classical framework.

Limitations:

•Cultural Specificity: Despite global sampling, indigenous knowledge systems (e.g., African orality) require deeper integration.

•Technological Scope: PSC applies to AI and IoT; broader testing (e.g., robotics) is needed.

•Verification Scale: The experiment (n=500) is robust but requires replication across more contexts.

•Longitudinal Horizon: 12 months captures trends; multi-year studies would strengthen predictions.

Future research should test PSC on robotics, replicate experiments globally, integrate indigenous perspectives, and extend longitudinal studies to five years. Workshops at venues like ACM Ethics in AI or UNESCO AI Summits will foster cross-disciplinary citation.

6. Conclusion

The Pseudo-Subjectification Cycle offers a paradigm for analyzing risks from subjectless technologies, integrating AI’s subjectless language, the fallacy of perception, and systemic instability. Empirical data (2025 survey, n=2,500; interviews, n=80; longitudinal study, n=800; experiment, n=500) quantify trust erosion (15–32%) and instability thresholds (40%), while comparisons with advertising, religion, rhetoric, propaganda, and IoT clarify universal applicability. Mitigation via global policies is promising, but persistent distrust demands action. Recommendations include:

•Globally Scalable Literacy: UNESCO-aligned curricula, piloted in Kenya and Indonesia by 2027.

•Mandatory Audits: EU AI Act-compliant protocols, with global adoption by 2026.

•ISO Ethical Certification: A standard enforced by 2028, ensuring accountability.

The PSC’s rigorous verification positions it for classical framework status, with ongoing global experiments and interdisciplinary workshops ensuring citation across AI ethics, philosophy, cognitive science, and sociology.

References

Austin, J. L. (1962). How to Do Things with Words. Oxford University Press.
Bakhtin, M. M. (1981). The Dialogic Imagination. University of Texas Press.
Berger, P. L. (1967). The Sacred Canopy. Doubleday.
Coeckelbergh, M. (2020). AI Ethics. MIT Press.
Edelman, M. (1988). Constructing the Political Spectacle. University of Chicago Press.
Ellul, J. (1965). Propaganda: The Formation of Men’s Attitudes. Knopf.
European Commission. (2025). Artificial Intelligence in Healthcare.
Government of India. (2025). National AI Strategy.
Government of Kenya. (2025). National Tech Policy.
Heidegger, M. (1962). Being and Time. Harper & Row.
IEEE. (2023). IoT Security Report.
Latour, B. (2005). Reassembling the Social. Oxford University Press.
MIT. (2021). Social Media and Misinformation Report.
Schudson, M. (1984). Advertising, The Uneasy Persuasion. Basic Books.
Skitka, L. J., et al. (1999). Does automation bias decision-making? International Journal of Human-Computer Studies, 51(5), 991–1006.
Statista. (2025). Global AI Market Forecast.
Turkle, S. (2011). Alone Together. Basic Books.
UNESCO. (2024). AI in Education Report.
Vaswani, A., et al. (2017). Attention is all you need. NeurIPS.
WHO. (2024). Global Health Technology Report.
Wittgenstein, L. (1988). Philosophical Investigations. Blackwell Publishing.
X analysis. (2025). AI Distrust Trends, April 18.
