{"id":86702,"date":"2026-01-21T21:58:31","date_gmt":"2026-01-21T21:58:31","guid":{"rendered":"https:\/\/hanstimmerman.me\/?p=86702"},"modified":"2026-01-21T22:02:06","modified_gmt":"2026-01-21T22:02:06","slug":"menselijke-versus-kunstmatige-intelligentie","status":"publish","type":"post","link":"https:\/\/hanstimmerman.me\/nl_nl\/menselijke-versus-kunstmatige-intelligentie\/","title":{"rendered":"Menselijke versus kunstmatige intelligentie"},"content":{"rendered":"<p style=\"text-align: right;\"><span style=\"color: #000000;\"><em>English version: scroll down<\/em><\/span><\/p>\n<p><span style=\"color: #000000;\"><b>Dezelfde uitkomst, een totaal andere redenatie<\/b><\/span><\/p>\n<p><span style=\"color: #000000;\">In een <a href=\"https:\/\/www.linkedin.com\/posts\/walterquattrociocchi_ive-never-had-two-editorials-in-top-tier-activity-7399375954743123968-Sn9Y\/?utm_medium=ios_app&amp;rcm=ACoAAAA27-wB19j2cQJjCsK8SuD-fNA-AEnfBQE&amp;utm_source=social_share_send&amp;utm_campaign=mail\">recent<\/a> LinkedIn-artikel werd scherp blootgelegd hoe fundamenteel verschillend menselijke en kunstmatige intelligentie eigenlijk functioneren. Niet zozeer in wat ze produceren, maar in <i>hoe<\/i> ze tot die productie komen. Mensen en grote taalmodellen (LLMs) kunnen zinnen schrijven die sterk op elkaar lijken, vergelijkbare beoordelingen geven en soms zelfs identieke conclusies trekken.<\/span><\/p>\n<p><span style=\"color: #000000;\">Die gelijkenis is echter oppervlakkig. Onder de motorkap gaapt een ontologische kloof: een fundamenteel verschil in de aard van hun bestaan. Menselijke intelligentie ontstaat uit een belichaamd, ervaringsrijk leven in de wereld. 
Kunstmatige intelligentie daarentegen berust op statistische patronen tussen symbolen \u2014 zonder lichaam, zonder ervaring, zonder eigen verhouding tot de werkelijkheid.<\/span><\/p>\n<p><span style=\"color: #000000;\"><b>Oordeel is belichaamd<\/b><\/span><\/p>\n<p><span style=\"color: #000000;\">Menselijk oordeel ontstaat uit een geleefd leven. Het wordt gevormd door lichamelijke ervaring, emoties, herinneringen, sociale interacties, morele intu\u00efties en intenties. Een mens oordeelt niet alleen <i>over<\/i> de wereld, maar altijd <i>in<\/i> de wereld \u2014 met een lichaam, in een tijdsverloop, met persoonlijke inzet en verantwoordelijkheid.<\/span><\/p>\n<p><span style=\"color: #000000;\">Een taalmodel heeft niets van dat alles. Het kent geen ervaring, geen lichaam, geen tijdsbesef, geen intentie. Het verwerkt tekst door die op te knippen in tokens \u2014 kleine, op zichzelf betekenisloze eenheden \u2014 en berekent welke woorden statistisch het meest waarschijnlijk op elkaar volgen. Betekenis ontstaat niet uit beleefde realiteit, maar uit waarschijnlijkheidsverdelingen over enorme tekstkorpora. Woorden verwijzen niet naar de wereld, maar naar andere woorden.<\/span><\/p>\n<p style=\"text-align: center;\"><em><span style=\"color: #000000;\">En toch: de output kan verbluffend menselijk lijken.<\/span><\/em><\/p>\n<p><span style=\"color: #000000;\"><b>Wanneer plausibiliteit kennis begint te vervangen<\/b><\/span><\/p>\n<p><span style=\"color: #000000;\">Wanneer radicaal verschillende processen tot vrijwel identieke taal leiden, verschuift het probleem van technologie naar epistemologie: de vraag wat we eigenlijk als kennis accepteren.<\/span><\/p>\n<p><span style=\"color: #000000;\">Het risico zit niet primair in leugens \u2014 taalmodellen liegen zelden bewust \u2014 maar in de illusie die ontstaat wanneer vloeiende, coherente taal verificatie begint te vervangen. 
Wanneer de <i>vorm<\/i> van kennis \u2014 overtuigend, grammaticaal perfect, zelfverzekerd \u2014 de daadwerkelijke arbeid van kennen overschaduwt.<\/span><\/p>\n<p><span style=\"color: #000000;\">Dit is geen academisch detail, maar een praktische waarschuwing voor iedereen die AI inzet voor evaluatie, advies of oordeel.<\/span><\/p>\n<p><span style=\"color: #000000;\"><b>Een oude droom in een nieuwe gedaante<\/b><\/span><\/p>\n<p><span style=\"color: #000000;\">Het verlangen om denken mechanisch te reproduceren is oud. Griekse mythen verhalen over de gouden automaten van Hephaestus. De Joodse golem werd tot leven gewekt door letters. In de middeleeuwen droomde Ramon Llull van logische machines; Leibniz fantaseerde over een <i>calculus ratiocinator<\/i> die conflicten oplost met symbolen. Descartes en La Mettrie zagen de mens zelf als machine.<\/span><\/p>\n<p><span style=\"color: #000000;\">Alan Turing maakte deze droom in 1950 meetbaar met een pragmatische vraag: kan een machine gedrag vertonen dat niet van menselijk gedrag te onderscheiden is? Daarmee werd intelligentie gereduceerd tot imiteerbaar gedrag.<\/span><\/p>\n<p style=\"text-align: center;\"><em><span style=\"color: #000000;\">En precies daar staan we nu.<\/span><\/em><\/p>\n<p><span style=\"color: #000000;\"><b>De studie van Loru et al.: een systematische blik<\/b><\/span><\/p>\n<p><span style=\"color: #000000;\">Een recente <a style=\"color: #000000;\" href=\"https:\/\/doi.org\/10.1073\/pnas.2518443122\">publicatie<\/a> in<\/span> <i style=\"color: #000000;\">Proceedings of the National Academy of Sciences<\/i><span style=\"color: #000000;\"> biedt hier zeldzame helderheid: <\/span><b style=\"color: #000000;\">Loru et al. (2025)<\/b><span style=\"color: #000000;\">. 
De onderzoekers lieten zes grote taalmodellen \u2014 waaronder varianten van ChatGPT, Gemini, Llama en Mistral \u2014 en mensen dezelfde taak uitvoeren: nieuwsbronnen beoordelen op betrouwbaarheid en bias, volgens identieke stappen (criteria kiezen, inhoud ophalen, uitleg geven).<\/span><\/p>\n<p><span style=\"color: #000000;\">Die beoordelingen werden vergeleken met expertanalyses van onder meer NewsGuard en Media Bias\/Fact Check. Op het eerste gezicht presteren de modellen goed: de alignment met experts is vaak hoog.<\/span><\/p>\n<p style=\"text-align: center;\"><em><span style=\"color: #000000;\">Maar onder de oppervlakte verschijnen structurele verschillen.<\/span><\/em><\/p>\n<p><span style=\"color: #000000;\">Modellen leunen zwaar op lexicale patronen \u2014 woordkeuze, stijl, toon \u2014 in plaats van inhoudelijke context. Er ontstaan politieke asymmetrie\u00ebn, waarbij bepaalde ideologische richtingen consistent als betrouwbaarder worden beoordeeld. En modellen blijken geneigd lingu\u00efstische elegantie te verwarren met epistemische kwaliteit.<\/span><\/p>\n<p><span style=\"color: #000000;\">De auteurs noemen dit fenomeen <b>\u201cepistemia\u201d<\/b>: de illusie van kennis wanneer plausibiliteit verificatie vervangt \u2014 of, in mooiere woorden, het overtuigend verbloemen van een inhoudelijk gebrek aan begrip.<\/span><\/p>\n<p><span style=\"color: #000000;\"><b>Twee paden naar hetzelfde antwoord\u00a0<\/b><\/span><\/p>\n<p><img data-recalc-dims=\"1\" decoding=\"async\" class=\"attachment-266x266 alignright\" src=\"https:\/\/i0.wp.com\/hanstimmerman.me\/wp-content\/uploads\/2026\/01\/verschil-HI-en-AI.jpg?resize=207%2C306&#038;ssl=1\" sizes=\"(max-width: 180px) 100vw, 180px\" srcset=\"https:\/\/i0.wp.com\/hanstimmerman.me\/wp-content\/uploads\/2026\/01\/verschil-HI-en-AI.jpg?w=558&amp;ssl=1 558w, https:\/\/i0.wp.com\/hanstimmerman.me\/wp-content\/uploads\/2026\/01\/verschil-HI-en-AI.jpg?resize=203%2C300&amp;ssl=1 203w, 
https:\/\/i0.wp.com\/hanstimmerman.me\/wp-content\/uploads\/2026\/01\/verschil-HI-en-AI.jpg?resize=8%2C12&amp;ssl=1 8w\" alt=\"\" width=\"207\" height=\"306\" \/><\/p>\n<p><span style=\"color: #000000;\"><b>Het menselijke pad<\/b><\/span><br \/>\n<span style=\"color: #000000;\">Informatie komt binnen via zintuigen \u2192 wordt gekoppeld aan herinneringen, emoties en waarden \u2192 verwerkt in sociale en morele context \u2192 leidt tot een oordeel dat onvermijdelijk gekleurd is door wie je bent.<\/span><\/p>\n<p><span style=\"color: #000000;\"><b>Het model-pad<\/b><\/span><br \/>\n<span style=\"color: #000000;\">Tekst \u2192 tokenisatie \u2192 statistische voorspelling van het volgende token \u2192 laag na laag waarschijnlijkheidsberekening \u2192 output die volgens trainingsdata het meest \u201cmenselijk\u201d aandoet.<\/span><\/p>\n<p><span style=\"color: #000000;\">De uitkomst kan identiek lijken.<\/span><br \/>\n<span style=\"color: #000000;\">Het proces is radicaal onpersoonlijk.<\/span><\/p>\n<p><span style=\"color: #000000;\"><b>Implicaties: waar we nu staan<\/b><\/span><\/p>\n<p><span style=\"color: #000000;\">AI ondersteunt inmiddels talloze schaalbare taken: samenvattingen maken, patronen herkennen, hypotheses genereren, synthetische data produceren. Dat is waardevol.<\/span><\/p>\n<p><span style=\"color: #000000;\">Maar zodra we daadwerkelijk <i>oordeel<\/i> delegeren \u2014 in onderwijs, journalistiek, rechtspraak, beleid of zorg \u2014 moeten we ons realiseren wat we uitbesteden: geen inzicht, maar een plausibele simulatie ervan. 
De studie laat zien dat die simulatie overtuigend kan zijn, maar systematisch afwijkt op punten die ertoe doen: bias, contextgevoeligheid en normatieve diepgang.<\/span><\/p>\n<p><span style=\"color: #000000;\"><b>Slot<\/b><\/span><\/p>\n<p><span style=\"color: #000000;\">De uitdaging is niet technologisch \u2014 modellen zullen onvermijdelijk beter worden.<\/span> <span style=\"color: #000000;\">De uitdaging is menselijk: blijven we scherp zien wat AI n\u00ed\u00e9t is? <\/span><span style=\"color: #000000;\">Wie hier vaker over nadenkt, herkent dit spanningsveld. Techniek presenteert zich zelden als neutraal instrument, zeker niet wanneer zij zich vermomt als oordelend wezen.<\/span><\/p>\n<p><strong><span style=\"color: #000000;\">De vraag is niet of kunstmatige intelligentie slim genoeg wordt.<\/span> <span style=\"color: #000000;\">De vraag is of wij wijs genoeg blijven om het verschil te blijven zien.<\/span><\/strong><\/p>\n<p>*) Loru, E. et al. (2025). The simulation of judgment in LLMs. PNAS 122(42): e2518443122. https:\/\/doi.org\/10.1073\/pnas.2518443122<\/p>\n<p><span style=\"color: #000000;\">Photo by <a href=\"https:\/\/unsplash.com\/@eclipticgraphic?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText\">Ecliptic Graphic<\/a> on <a href=\"https:\/\/unsplash.com\/photos\/a-computer-circuit-board-with-a-brain-on-it-_jg8xh2SsXQ?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText\">Unsplash<\/a><\/span><\/p>\n<p style=\"text-align: center;\"><span style=\"color: #000000;\">&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;- \u00a0Translated by ChatGPT \u00a0&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;<\/span><\/p>\n<h2 data-start=\"181\" data-end=\"219\"><span style=\"color: #000000;\">Human vs. 
Artificial Intelligence<\/span><\/h2>\n<p data-start=\"220\" data-end=\"267\"><span style=\"color: #000000;\"><strong data-start=\"220\" data-end=\"267\">Same outcome, radically different reasoning<\/strong><\/span><\/p>\n<p data-start=\"269\" data-end=\"589\"><span style=\"color: #000000;\">A <a href=\"https:\/\/www.linkedin.com\/posts\/walterquattrociocchi_ive-never-had-two-editorials-in-top-tier-activity-7399375954743123968-Sn9Y\/?utm_medium=ios_app&amp;rcm=ACoAAAA27-wB19j2cQJjCsK8SuD-fNA-AEnfBQE&amp;utm_source=social_share_send&amp;utm_campaign=mail\">recent<\/a> LinkedIn article highlighted a fundamental difference between human and artificial intelligence. Not so much in <em data-start=\"390\" data-end=\"396\">what<\/em> they produce, but in <em data-start=\"418\" data-end=\"423\">how<\/em> they arrive there. Humans and large language models (LLMs) can generate similar sentences, reach comparable judgments, and sometimes even draw identical conclusions.<\/span><\/p>\n<p data-start=\"591\" data-end=\"987\"><span style=\"color: #000000;\">That similarity, however, is superficial. Beneath the surface lies an ontological gap \u2014 a fundamental difference in the nature of their existence. Human intelligence emerges from an embodied, experiential life in the world. Artificial intelligence, by contrast, is built on statistical relationships between symbols \u2014 without a body, without experience, without any lived relationship to reality.<\/span><\/p>\n<h2 data-start=\"994\" data-end=\"1017\"><span style=\"color: #000000;\">Judgment is embodied<\/span><\/h2>\n<p data-start=\"1019\" data-end=\"1328\"><span style=\"color: #000000;\">Human judgment arises from a lived life. It is shaped by bodily experience, emotions, memories, social interaction, moral intuition, and intention. 
Humans do not judge <em data-start=\"1187\" data-end=\"1194\">about<\/em> the world from the outside; they judge <em data-start=\"1234\" data-end=\"1242\">within<\/em> the world \u2014 with a body, over time, and with personal involvement and responsibility.<\/span><\/p>\n<p data-start=\"1330\" data-end=\"1746\"><span style=\"color: #000000;\">A language model has none of this. It has no experience, no body, no sense of time, no intention. It processes text by breaking it down into tokens \u2014 small, intrinsically meaningless units \u2014 and calculating which words are statistically most likely to follow. Meaning does not arise from lived reality, but from probability distributions across vast text corpora. Words do not point to the world, but to other words.<\/span><\/p>\n<p data-start=\"1748\" data-end=\"1794\"><span style=\"color: #000000;\">And yet, the output can feel strikingly human.<\/span><\/p>\n<h2 data-start=\"1801\" data-end=\"1849\"><span style=\"color: #000000;\">When plausibility begins to replace knowledge<\/span><\/h2>\n<p data-start=\"1851\" data-end=\"2013\"><span style=\"color: #000000;\">When radically different processes produce nearly identical language, the problem shifts from technology to epistemology: what do we actually accept as knowledge?<\/span><\/p>\n<p data-start=\"2015\" data-end=\"2328\"><span style=\"color: #000000;\">The risk does not primarily lie in falsehoods \u2014 models rarely \u201clie\u201d intentionally \u2014 but in the illusion created when fluent, coherent language begins to substitute for verification. When the <em data-start=\"2206\" data-end=\"2212\">form<\/em> of knowledge \u2014 persuasive, grammatically perfect, authoritative-sounding \u2014 overshadows the actual labor of knowing.<\/span><\/p>\n<p data-start=\"2330\" data-end=\"2445\"><span style=\"color: #000000;\">This is not an academic concern. 
It is a practical warning for anyone using AI for evaluation, advice, or judgment.<\/span><\/p>\n<h2 data-start=\"2452\" data-end=\"2481\"><span style=\"color: #000000;\">An old dream in a new form<\/span><\/h2>\n<p data-start=\"2483\" data-end=\"2839\"><span style=\"color: #000000;\">The desire to mechanize thinking is ancient. Greek myths tell of Hephaestus\u2019 golden automatons. The Jewish golem was animated through letters. In the Middle Ages, Ramon Llull dreamed of logical machines; Leibniz imagined a <em data-start=\"2706\" data-end=\"2729\">calculus ratiocinator<\/em> capable of resolving disputes through symbols. Descartes and La Mettrie viewed humans themselves as machines.<\/span><\/p>\n<p data-start=\"2841\" data-end=\"3014\"><span style=\"color: #000000;\">In 1950, Alan Turing reframed this dream pragmatically: can a machine exhibit behavior indistinguishable from that of a human? Intelligence was reduced to imitable behavior.<\/span><\/p>\n<p data-start=\"3016\" data-end=\"3064\"><span style=\"color: #000000;\">That is precisely where we find ourselves today.<\/span><\/p>\n<h2 data-start=\"3071\" data-end=\"3117\"><span style=\"color: #000000;\">The study by Loru et al.: a systematic view<\/span><\/h2>\n<p data-start=\"3119\" data-end=\"3524\"><span style=\"color: #000000;\">A recent publication *) in <em data-start=\"3137\" data-end=\"3186\">Proceedings of the National Academy of Sciences<\/em> offers rare clarity: <strong data-start=\"3208\" data-end=\"3230\">Loru et al. (2025)<\/strong>. 
The researchers asked six major language models \u2014 including versions of ChatGPT, Gemini, Llama, and Mistral \u2014 and humans to perform the same task: evaluating news sources for reliability and bias, following identical steps (selecting criteria, retrieving content, and providing explanations).<\/span><\/p>\n<p data-start=\"3526\" data-end=\"3735\"><span style=\"color: #000000;\">Their results were compared with expert assessments from organizations such as NewsGuard and Media Bias\/Fact Check. At first glance, the models perform well, often showing high alignment with expert judgments.<\/span><\/p>\n<p data-start=\"3737\" data-end=\"3792\"><span style=\"color: #000000;\">But beneath the surface, systematic differences emerge.<\/span><\/p>\n<p data-start=\"3794\" data-end=\"4071\"><span style=\"color: #000000;\">Models rely heavily on lexical cues \u2014 wording, style, tone \u2014 rather than deeper contextual understanding. Political asymmetries appear, with certain ideological positions consistently rated as more reliable. 
And linguistic elegance is frequently mistaken for epistemic quality.<\/span><\/p>\n<p data-start=\"4073\" data-end=\"4272\"><span style=\"color: #000000;\">The authors call this phenomenon <strong data-start=\"4106\" data-end=\"4121\">\u201cepistemia\u201d<\/strong>: the illusion of knowledge that arises when plausibility replaces verification \u2014 a polished way of sounding knowledgeable without truly understanding.<\/span><\/p>\n<h2 data-start=\"4279\" data-end=\"4310\"><span style=\"color: #000000;\">Two paths to the same answer<\/span><\/h2>\n<p data-start=\"4312\" data-end=\"4517\"><img data-recalc-dims=\"1\" decoding=\"async\" class=\"attachment-266x266 size-266x266 alignright\" src=\"https:\/\/i0.wp.com\/hanstimmerman.me\/wp-content\/uploads\/2026\/01\/verschil-HI-en-AI.jpg?resize=180%2C266&#038;ssl=1\" sizes=\"(max-width: 180px) 100vw, 180px\" srcset=\"https:\/\/i0.wp.com\/hanstimmerman.me\/wp-content\/uploads\/2026\/01\/verschil-HI-en-AI.jpg?w=558&amp;ssl=1 558w, https:\/\/i0.wp.com\/hanstimmerman.me\/wp-content\/uploads\/2026\/01\/verschil-HI-en-AI.jpg?resize=203%2C300&amp;ssl=1 203w, https:\/\/i0.wp.com\/hanstimmerman.me\/wp-content\/uploads\/2026\/01\/verschil-HI-en-AI.jpg?resize=8%2C12&amp;ssl=1 8w\" alt=\"\" width=\"180\" height=\"266\" \/><\/p>\n<p data-start=\"4312\" data-end=\"4517\"><span style=\"color: #000000;\"><strong data-start=\"4312\" data-end=\"4330\">The human path<\/strong><\/span><br data-start=\"4330\" data-end=\"4333\" \/><span style=\"color: #000000;\">Information enters through the senses \u2192 is connected to memories, emotions, and values \u2192 processed within social and moral contexts \u2192 results in a judgment shaped by who the person is.<\/span><\/p>\n<p data-start=\"4519\" data-end=\"4712\"><span style=\"color: #000000;\"><strong data-start=\"4519\" data-end=\"4537\">The model path<\/strong><\/span><br data-start=\"4537\" data-end=\"4540\" \/><span style=\"color: #000000;\">Text \u2192 tokenization \u2192 statistical 
prediction of the next token \u2192 layer upon layer of probability calculations \u2192 output that appears most \u201chuman\u201d according to training data.<\/span><\/p>\n<p data-start=\"4714\" data-end=\"4783\"><span style=\"color: #000000;\">The outcome may look the same.<\/span><br data-start=\"4744\" data-end=\"4747\" \/><span style=\"color: #000000;\">The process is radically impersonal.<\/span><\/p>\n<h2 data-start=\"4790\" data-end=\"4825\"><span style=\"color: #000000;\">Implications: where we stand now<\/span><\/h2>\n<p data-start=\"4827\" data-end=\"4969\"><span style=\"color: #000000;\">AI already supports many scalable tasks: summarization, pattern recognition, hypothesis generation, synthetic data creation. That is valuable.<\/span><\/p>\n<p data-start=\"4971\" data-end=\"5331\"><span style=\"color: #000000;\">But once we begin delegating <em data-start=\"5000\" data-end=\"5010\">judgment<\/em> \u2014 in education, journalism, law, policy, or healthcare \u2014 we must be clear about what we are outsourcing: not understanding, but a plausible simulation of it. The study shows that this simulation can be highly convincing, yet systematically diverges where it matters most: bias, contextual awareness, and normative depth.<\/span><\/p>\n<h2 data-start=\"5338\" data-end=\"5348\"><span style=\"color: #000000;\">Closing<\/span><\/h2>\n<p data-start=\"5350\" data-end=\"5498\"><span style=\"color: #000000;\">The challenge is not technological &#8211; models will inevitably improve &#8211; but human: do we remain capable of recognizing what AI is <em data-start=\"5492\" data-end=\"5497\">not<\/em>?<\/span><\/p>\n<p data-start=\"5791\" data-end=\"5944\"><strong><span style=\"color: #000000;\">The question, then, is not whether artificial intelligence will become smart enough. 
The question is whether we will remain wise enough to see the difference.<\/span><\/strong><\/p>\n<p data-start=\"5791\" data-end=\"5944\"><span style=\"color: #000000;\">*) Loru, E. et al. (2025). The simulation of judgment in LLMs. PNAS 122(42): e2518443122. https:\/\/doi.org\/10.1073\/pnas.2518443122<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The desire to mechanize thinking is ancient. Greek myths tell of Hephaestus\u2019 golden automatons. The Jewish golem was animated through letters. In the Middle Ages, Ramon Llull dreamed of logical machines; Leibniz imagined a calculus ratiocinator capable of resolving disputes through symbols. Descartes and La Mettrie viewed humans themselves as machines.<\/p>\n<p>In 1950, Alan Turing reframed this dream pragmatically: can a machine exhibit behavior indistinguishable from that of a human? Intelligence was reduced to imitable behavior.<\/p>\n","protected":false},"author":3,"featured_media":86710,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[293,72],"tags":[297,682,898,933,934,935,936,937,938],"class_list":["post-86702","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence","category-digitalisation","tag-machinelearning","tag-artificialintelligence","tag-humanintelligence","tag-epistemology","tag-aiandsociety","tag-technologyethics","tag-criticalthinking","tag-digitalphilosophy","tag-aijudgment"],"jetpack_featured_media_url":"https:\/\/i0.wp.com\/hanstimmerman.me\/wp-content\/uploads\/2026\/01\/ecliptic-graphic-_jg8xh2SsXQ-unsplash-scaled.jpg?fit=2560%2C1440&ssl=1","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/hanstimmerman.me\/nl_nl\/wp-json\/wp\/v2\/posts\/86702","targetHints":{"allow":["GET"]}}],"collection
":[{"href":"https:\/\/hanstimmerman.me\/nl_nl\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/hanstimmerman.me\/nl_nl\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/hanstimmerman.me\/nl_nl\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/hanstimmerman.me\/nl_nl\/wp-json\/wp\/v2\/comments?post=86702"}],"version-history":[{"count":12,"href":"https:\/\/hanstimmerman.me\/nl_nl\/wp-json\/wp\/v2\/posts\/86702\/revisions"}],"predecessor-version":[{"id":86715,"href":"https:\/\/hanstimmerman.me\/nl_nl\/wp-json\/wp\/v2\/posts\/86702\/revisions\/86715"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/hanstimmerman.me\/nl_nl\/wp-json\/wp\/v2\/media\/86710"}],"wp:attachment":[{"href":"https:\/\/hanstimmerman.me\/nl_nl\/wp-json\/wp\/v2\/media?parent=86702"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/hanstimmerman.me\/nl_nl\/wp-json\/wp\/v2\/categories?post=86702"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/hanstimmerman.me\/nl_nl\/wp-json\/wp\/v2\/tags?post=86702"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}