{"id":80202,"date":"2025-02-04T17:55:23","date_gmt":"2025-02-04T17:55:23","guid":{"rendered":"https:\/\/www.noemamag.com"},"modified":"2025-06-12T00:36:25","modified_gmt":"2025-06-12T00:36:25","slug":"why-ai-is-a-philosophical-rupture","status":"publish","type":"wpm-article","link":"https:\/\/www.noemamag.com\/why-ai-is-a-philosophical-rupture","title":{"rendered":"Why AI Is A Philosophical Rupture"},"content":{"rendered":"<p>Tobias Rees, founder of an AI studio located at the intersection of philosophy, art and technology, sat down with Noema Editor-in-Chief Nathan Gardels to discuss the philosophical significance of generative AI.<\/p><p><strong>Nathan Gardels: <\/strong>What remains unclear to us humans is the nature of machine intelligence we have created through AI and how it changes our own understanding of ourselves. What is your perspective as a philosopher who has contemplated this issue not from within the Ivory Tower, but \u201cin the wild,\u201d in the engineering labs at Google and elsewhere?<\/p><p><strong>Tobias Rees: <\/strong>AI profoundly challenges how we have understood ourselves.<\/p><p>Why do I think so?<\/p><p>We humans live by a large number of conceptual presuppositions. We may not always be aware of them \u2014 and yet they are there and shape how we think and understand ourselves and the world around us. Collectively, they are the logical grid or architecture that underlies our lives.<\/p><p>What makes AI such a profound philosophical event is that it defies many of the most fundamental, most taken-for-granted concepts \u2014 or philosophies \u2014 that have defined the modern period and that most humans still mostly live by. It literally renders them insufficient, thereby marking a deep caesura.<\/p><p>Let me give a concrete example. 
One of the most fundamental assumptions of the modern period has been that there is a clear-cut distinction between us humans and machines.<\/p><p>Here humans, living organisms; open and evolving; beings that are equipped with intelligence and, thus, with interiority.<\/p><p>There machines, lifeless, mechanical things; closed, determined and deterministic systems devoid of intelligence and interiority.<\/p><p>This distinction, which first surfaced in the 1630s, was constitutive of the modern notion of what it is to be human. For example, almost the entire vocabulary that was invented between the 17th and 19th centuries to capture what it truly is to be human was grounded in the human\/intelligence-machine\/mechanism distinction.<\/p><p>Agency, art, creativity, consciousness, culture, existence, freedom, history, knowledge, language, morals, play, politics, society, subjectivity, truth, understanding. All of these concepts were introduced with the explicit purpose of providing us with an understanding of what is truly unique about human potential, a uniqueness that was grounded in the belief that intelligence is what lifts us above everything else \u2014 and that everything else ultimately can be sufficiently described as a closed, determined mechanical system.<\/p><p>The human-machine distinction provided modern humans with a scaffold for how to understand themselves and the world around them. The philosophical significance of AIs \u2014 of built, technical systems that are intelligent \u2014 is that they break this scaffold.<\/p><p>What that means is that an epoch that was stable for almost 400 years comes \u2014 or appears to come \u2014 to an end.<\/p><p>Poetically put, it is a bit as if AI releases us and the world from the understanding of ourselves and the world we had. It leaves us in the open.<\/p><p>I am adamant that those who build AI understand the philosophical stakes of AI. 
That is why I became, as you put it, a philosopher in the wild.<\/p><p><strong>Gardels: <\/strong>You say that AI is intelligent. But many people doubt that AI is \u201creally\u201d intelligent. They view it as just another tool like all previous human-invented technologies.<\/p><p><strong>Rees: <\/strong>In my experience, this question is almost always grounded in a defensive impulse. A sometimes angry, sometimes anxious effort to hold on to or to re-inscribe the old distinctions. I think of it as a nostalgia for human exceptionalism, that is, a longing for a time when we humans thought there was only one form of intelligence, us.<\/p><p>AI teaches us that this is not so. And not just AI, of course. Over the last two decades or so the concept of intelligence has multiplied. We now know that there are lots of other kinds of intelligence: from bacteria to octopi, from Earth systems to the spiral arms of galaxies. We are an entry in a series. And so is AI.<\/p><p>To argue that these other things are not &#8220;really&#8221; intelligent because their intelligence differs from ours is a bit silly. That would be like one species of bird, say pelicans, insisting that only pelicans \u201creally\u201d know how to fly.<\/p><p>It is best if we get rid of the \u201creally\u201d and simply acknowledge that AI <em>is<\/em> intelligent, if in ways slightly different from us.<\/p><p><strong>Gardels: <\/strong>What is intelligence?<\/p><p><strong>Rees: <\/strong>Today, we appear to know that there are some baseline qualities to intelligence such as learning from experience, logical understanding and the capability to abstract from what one has learned to solve novel situations.<\/p><p>AI systems have all these qualities. They learn, they logically understand and they form abstractions that allow them to navigate new situations.<\/p><p>However, what experience or learning or understanding or abstraction means for an AI system and for us humans is not quite the same. 
That is why I suggested that AI is intelligent in ways slightly different from us.<\/p><p><strong>Gardels: <\/strong>AI may be another kind of intelligence, but can we say it is, or can be, smarter than us?<\/p><p><strong>Rees: <\/strong>For me, the question is not necessarily whether or not AI is smarter than us, but whether or not our different intelligences can be complementary. Can we be smarter together?<\/p><p>Let me sketch some of the differences I am seeing.<\/p><p>AI can operate on scales \u2014 both micro and macro \u2014 that are beyond human logical comprehension and capability.<\/p><p>For example, AI has much more information available than we do and it can access and work through this information faster than we can. 
It also can discover logical structures in data \u2014 patterns \u2014 where we see nothing.<\/p><p>Perhaps one must pause for a moment to recognize how extraordinary this is.<\/p><p>AI can literally give us access to spaces that we, on our own, qua human, cannot discover and cannot access. How amazing is this? There are already many examples of this. They range from discovering new moves in games like Go or chess to discovering how proteins fold to understanding whole Earth systems.<\/p><p>Given these more-than-human qualities, one could say that AI is smarter than us.<\/p><p>However, human smartness is not reducible to the kind of intelligence or smartness AI has. It has additional dimensions, ones that AI seems not to have.<\/p><p>The perhaps most important of these additional dimensions is our individual need to live a human life.<\/p><p>What does that mean? At the very least it means that we humans navigate the outside world in terms of our inside worlds. We must orient ourselves by way of thinking, in terms of a thinking self. These thinking selves must understand, make sense of, and be struck by, insights.<\/p><p>No matter how smart AI is, it cannot be smart for me. It can provide me with information, it can even engage me in a thought process, but I still need to orient myself in terms of my thinking. I still need to have my own experiences and my own insights, insights that enable me to live my life.<\/p><p>That said, AI, the specific non-human smartness it has, can be incredibly helpful when it comes to leading a human life.<\/p><p>The most powerful example I can think of is that it can make the self visible to itself in ways we humans cannot.<\/p><p>Imagine an on-device AI system \u2014 an AI model that exists only on your devices and is not connected to the internet \u2014 that has access to all your data. 
Your emails, your messages, your documents, your voice memos, your photos, your songs, etc.<\/p><p>I stress on-device because it matters that no third parties have access to your data.<\/p><p>Such an AI system can make me visible to myself in ways neither I nor any other human can. It literally can lift me above me. It can show me myself from outside of myself, show me the patterns of thoughts and behaviors that have come to define me. It can help me understand these patterns and it can discuss with me whether they are constraining me, and if so, then how. What is more, it can help me work on those patterns and, where appropriate, enable me to break from them and be set free.<\/p><p>Philosophically put, AI can help me transform myself into an \u201cobject of thought\u201d to which I can relate and on which I can work.<\/p><p>The work of the self on the self has formed the core of what Greek philosophers called melet\u0113 and Roman philosophers meditatio. And the kind of AI system I evoke here would be a philosopher\u2019s dream. It could make us humans visible to ourselves in ways no human interlocutor can, from outside of us, free from conversational narcissism.<\/p><p>You see, there can be incredible beauty in the overlap and the difference between our intelligence and that of AI.<\/p><p>Ultimately, I do not think of AI as a self-enclosed, autonomous entity that is in competition with us. Rather, I think of it as a relation.<\/p><p><strong>Gardels:<\/strong> What is specifically new that distinguishes deep learning-based AI systems from the old human\/machine dichotomy?<\/p><p><strong>Rees: <\/strong>The kind of AI that ruled from the 1950s to the early 2000s was an attempt to think about the human from within the vocabulary provided by machines. 
It was an explicit, self-conscious attempt by engineers to explain all things human from within the conceptual space of the possibility of machines.<\/p><p>It was called &#8220;symbolic AI&#8221; because the basic idea behind these systems was that we could store knowledge in mathematical symbols and then equip computers with rules for how to derive relevant answers from those symbolic representations.<\/p><p>Some philosophers, most famously Herbert Dreyfus and John Searle, were very much provoked by this. They set out to defend the idea that humans are more than machines, more than rule-based algorithms.<\/p><p>But the kind of AI that has risen to prominence since the early 2010s, so-called deep learning systems or deep neural networks, is of an altogether different kind.<\/p><p>Symbolic AI systems, like all prior machines, were closed, determined systems. That means, first, that they were limited in what they could do by the rules we gave them. When they encountered a situation that was not covered by the rules, they failed. Let\u2019s say they had no adaptive, no learning behavior. 
And it means as well that what they could do was entirely reducible to the engineers who built them. They could, ultimately, only do things we had explicitly instructed them to do. That is, they had no agency, no agentive capabilities of their own. In short, they were tools.<\/p><p>With deep learning systems, this is different. We do not give them their knowledge. We do not program them. Rather, they learn on their own, for themselves, and, based on what they have learned, they can navigate situations or answer questions they have never seen before. That is, they are no longer closed, deterministic systems.<\/p><p>Instead they have a sort of openness and a sort of agentive behavior, a deliberation or decision-making space, that no technical system before them ever had. Some people say AI has \u201conly\u201d pattern recognition. But I think pattern recognition is actually a form of discovering the logical structure of things. Roughly, when you have a student who identifies the logical principles that underlie data and who can answer questions based on these logical principles, wouldn\u2019t you call that understanding?<\/p><p>In fact, one can push that a step further and say that AI systems appear to be capable of distinguishing truths from falsehoods. That\u2019s because truth is positively correlated with a consistent logical structure. Errors, so to speak, are all unique or different, while the truth is not. And what we see in AI models is that they can distinguish between statements that conform to the patterns that they discover and statements that don\u2019t.<\/p><p>So in that sense, AI systems have a nascent sense of truth.<\/p><p>Simply put, deep learning systems have qualities that, up until recently, were considered possible only for living organisms in general and for humans in particular.<\/p><p>Today\u2019s AI systems have qualities of both machines and living beings \u2013\u2013 and, thereby, are reducible to neither. 
They exist in between the old distinctions and show that the either-or logic that organized our understanding of reality \u2013\u2013 either human or machine, either alive or not, either natural or artificial, either being or thing \u2013\u2013 is profoundly insufficient.<\/p><p>Insofar as AI escapes these binary distinctions, it leads us into a terrain for which we have no words.<\/p><p>We could say, it opens up the world for us. It makes reality visible to us in ways we have never seen before. It shows us that we can understand and experience reality and ourselves in ways that lie outside of the logical distinctions that organized the modern period.<\/p><p>In some sense, we can see as if for the first time.<\/p><p><strong>Gardels: <\/strong>So, deep-learning systems are not just tools, but agents with a degree of autonomy?<\/p><p><strong>Rees: <\/strong>This question is a good example to showcase that AI is indeed philosophically new.<\/p><p>We used to think that agency has two prerequisites, being alive and having interiority, that is, a sense of self or consciousness. Now, what we can learn from AI systems is that this is apparently not the case. There are things that have agency but that are not alive and that do not have consciousness or a mind, at least not in the way we have previously understood these terms.<\/p><p>This insight, this decoupling of agency from life and from interiority, is a powerful invitation to see the world \u2014 and ourselves \u2014 differently.<\/p><p>For example, is what is true for agency \u2014 that it doesn\u2019t need life and interiority \u2014 also true for things like intelligence, creativity or language? 
And how would we classify or categorize things in the world differently if this were the case?<\/p><p>In her <a href=\"https:\/\/www.noemamag.com\/ai-is-life\/\">essay<\/a> in Noema, the astrophysicist Sara Walker said that \u201cwe need to get past our binary categorization of all things as either life or not.\u201d<\/p><p>What interests me most is rethinking the concepts we have inherited from the modern period, from the perspective of the <em>in-betweenness<\/em> made visible to us by AI.<\/p><p>What is creativity from the perspective of the in-betweenness of AI? What language? What mind?<\/p><h2 class=\"wp-block-heading has-text-align-center\" id=\"h-ii-a-new-aixial-age\"><strong>II. 
A New AIxial Age?<\/strong><\/h2><p><strong>Gardels: <\/strong>Karl Jaspers was best known for his study of the so-called Axial Age when all the great religions and philosophies were born in relative simultaneity over two millennia ago \u2014 Confucianism in China, the Upanishads and Buddhism in India, Homer\u2019s Greece and the Hebrew prophets. Jaspers saw these civilizations arising in the long wake of what he called \u201cthe first Promethean Age\u201d of man\u2019s appropriation of fire and earliest inventions.<\/p><p>For Charles Taylor, the first Axial Age resulted from the \u201cgreat dis-embedding\u201d of the person from isolated communities and their natural environment, where circumscribed awareness had been limited to the sustenance and survival of the tribe guided by oral narrative myth. The lifting out from a closed-off world, according to Taylor, was enabled by the arrival of written language. This attainment of symbolic competency capacitated an \u201cinteriority of reflection\u201d based on abiding texts that created a platform for shared meanings beyond one\u2019s immediate circumstances and local narratives.<\/p><p>Long story very short, this \u201ctranscendence\u201d in turn led to the possibility of general philosophies, monotheistic religions and broad-based ethical systems. The critical self-distancing element of dis-embedded reflection further evolved into what the sociologist Robert Bellah called \u201ctheoretic culture,\u201d to scientific discovery and the Enlightenment that spawned modernity. 
For Bellah, \u201cPlato completed the transition to the Axial Age,\u201d with the idea of&nbsp;theoria&nbsp;that \u201cenables the mind to \u2018view\u2019 the great and the small in themselves abstracted from their concrete manifestations.\u201d<\/p><p>The big question is whether the new level of symbolic competence reached by AI will play a similar role in fostering a \u201cNew AIxial Age\u201d as written language did the first time around, when it gave rise to new philosophies, ethical systems and religions.<\/p><p><strong>Rees: <\/strong>I am not sure today\u2019s AI systems have what the modern period came to call symbolic competence.<\/p><p>That is related to what we\u2019ve already discussed.<\/p><p>There was, ever since John Locke, the idea that we humans have a mind in which we store experiences in the form of symbols or symbolic representations and then we derive answers from these symbols.<\/p><p>Let\u2019s say this conceptualization was understood throughout the modern period to be the basic infrastructure of intelligence.<\/p><p>In the late 19th century, philosophers like Ernst Cassirer gave this a twist. He suggested that the key to understanding what it is to be human is to see that we humans invent symbols or meaning and that symbol-making or meaning-making is what sets us apart as a species from everything else.<\/p><p>Deep learning, in general, and generative AI in particular, have broken with this human-centric concept of intelligence and replaced it with something else: The idea that intelligence is pretty much two things: learning and reasoning.<\/p><p>Essentially, learning means the capacity to discover abstract logical principles that organize the things we want to learn. Whether this is an actual data set or learning experiences that we humans make, there is no difference. 
Call it logical understanding.<\/p><p>The second defining feature of intelligence is the capacity to continuously and steadily refine and update these abstract logical principles, these understandings, and to apply them \u2013\u2013 by way of reasoning \u2013\u2013 to situations we live in and that we must navigate or solve.<\/p><p>Deep learning systems are most excellent at the first part \u2013\u2013 but not so much the second. Basically, once they are trained, they cannot revise the things they have learned. They can only infer.<\/p><p>Be that as it may, there is nothing much symbolic here. At least not in the classical sense of the term.<\/p><p>I am emphasizing this absence of the symbolic because it is a beautiful way to show that deep learning has led to a pretty powerful philosophical rupture: Implicit in the new concept of intelligence is a radically different ontological understanding of what it is to be human, indeed, of what reality is or of how it is structured and organized.<\/p><p>Understanding this rupture with the older concept of intelligence and ontology of the human\/the world is key, I think, to understanding your actual question: Are we entering what you call a new AIxial age, where AI will amount to something similar to what writing amounted to roughly 3,000 to 2,000 years ago?<\/p><p>If we are lucky, the answer is yes. The potential is absolutely there.<\/p><p>But let me try to articulate what I think the challenge is so we truly can make this possible.<\/p><p>Let\u2019s take the correlation between the emergence of writing, the birth of a vocabulary of interiority, and the rise of abstract or theoretical thought as our starting point.<\/p><p>I will do what I tried to do in my prior responses: Reflect on the historicity of the concepts we live by, point out how recent they are, that there is nothing timeless or universal about them, and then ask if AI challenges and changes them.<\/p><p>There is a beautiful book by Bruno Snell called \u201cDie Entdeckung des Geistes\u201d or, in an excellent English translation, \u201cThe Discovery of the Mind.\u201d<\/p><p>The work&#8217;s central thesis is that what we today call \u201cmind,\u201d \u201cconsciousness\u201d and \u201cinner life\u201d is not a given. It is not something that has always existed or was always experienced. Instead, it is a concept that only gradually emerged.<\/p><p>In beautiful, captivating prose Snell traces the earliest instances of the birth of what I think of as \u201ca vocabulary of interiority.\u201d<\/p><p>For example, he shows that in Homer&#8217;s works, there is no general, abstract concept of \u201cmind\u201d or \u201csoul.\u201d Instead, there is a whole flurry of terms that are very difficult to translate. 
For example, <em>thymos<\/em>, which is perhaps best articulated as a passion that overcomes and consumes one; <em>noos<\/em>, which originally meant sensory awareness; and <em>psyche<\/em>, a term by which Homer and his contemporaries most often meant \u201cbreath\u201d or that which animates, but not what we would call psyche today.<\/p><p>Simply put, there is absolutely no vocabulary of interiority in Homer. Or in Hesiod.<\/p><p>This changes at the turn from Archaic to Classical Greek. We begin to see the birth of a vocabulary of interiority and increasingly sophisticated ways of describing inner experience. The most important reference here is probably Sappho. Her poetry is among the very first explorations of what we today would call subjective experience and individual emotion.<\/p><p>I do not want to derail us by retelling the whole of Snell\u2019s book. Rather, what interests me is to convey a sense of the possibility that we discussed earlier: We humans have not always experienced ourselves the way we do today. Every form of experience and thinking or understanding is conceptually mediated. 
This is also true, perhaps particularly so, for the idea of interiority and inner life.<\/p><p>Snell\u2019s book is so wonderful because he shows the discontinuous, gradual emergence of new concepts that amount to the idea that there is something like an interiority and that this interiority \u2014 a kind of inner landscape \u2014 is where a single, self-identical \u201cI\u201d is located.<\/p><p>Now, what is crucial is that the introduction of writing, which probably began right at the time of Homer, was key for the emergence of a <em>conceptual vocabulary<\/em> of interiority.<\/p><p>Snell touches on this only in passing, but later works, especially by Jack Goody, Eric Havelock and Walter Ong, have attended to this explicitly and all have more or less come to the same conclusion: The practice of writing created new possibilities for analytical thinking that led to increasingly abstract, classificatory nouns and to a form of systematic search and production of knowledge that was not seen anywhere in human history before.<\/p><p>These authors also made clear that the only unfortunate thing about Snell\u2019s work is his use of the term \u201cdiscovery\u201d in his title. The mind was not discovered. It was constituted, invented, if you will. That is, it could have been constituted differently. And that is what Goody, Ong and others have amply shown. What mind is, what interiority is, is different in other places.<\/p><p>Let me summarize this simply by saying that the technology of writing had absolutely dramatic consequences for what it is to be human, for how we experience and understand ourselves as humans. 
The two perhaps most important of these consequences were <em>the systematic emergence of self-reflection and of abstract thought.<\/em><\/p><p>Can AI play as transformative a role in what it means to be human as writing did?<\/p><p>Can AI mark the beginning of a whole new, perhaps radically discontinuous chapter for what it is to have a mind, to have interiority, to think? Can it help us think thoughts that are so new and so different that the way we have understood ourselves up until now becomes obsolete?<\/p><p>Oh yes, it can! 
AI absolutely has the potential to be such a major philosophical event.<\/p><p>The perhaps most beautiful, most fascinating and eye-opening way to show this potential of AI is what engineers call \u201clatent space representations.\u201d<\/p><p>When a large language model learns, it gradually distills ever more abstract logical principles from the data it is provided with.<\/p><p>It is best to think of this process as roughly similar to a structuralist analysis: The AI identifies the logical structure that organizes \u2014 that literally underlies \u2014 the totality of the data it is trained on and stores or memorizes it in the form of concepts. The way it does this is that it discovers the logic of the relations between different elements of the data. So, in text, roughly, that would be the words: What is the closeness between the different words in the training data?<\/p><p>If you will, an LLM discovers the many different degrees of relations between words.<\/p><p>Fascinatingly, what emerges from this learning process is a high-dimensional, relational space that engineers call latent \u2014 in the sense of hidden \u2014 space.<\/p><p>First, this means that something grows on the inside of an LLM during training. A hidden map of the logic of relations between words that the AI successively discovers. 
I say on the inside because we humans cannot observe this map from the outside.<\/p><p>The second thing it means is that this map is not just a list but a spatial arrangement.<\/p><p>Imagine a three-dimensional point cloud where each point stands for a word and where the distance between points reflects how close or far words are from one another in the training data.<\/p><p>The third thing it means is that this spatial map doesn\u2019t have only the three dimensions \u2014 length, width, depth \u2014 our conscious human mind is comfortable operating in. Instead, it has many, many more dimensions. Tens of thousands, and with the latest models, perhaps millions.<\/p><p>That is, the understanding an LLM has formed is a spatial architecture. It has a geometry that literally determines what, for an LLM, is thinkable.<\/p><p>It is literally the logical condition of possibility \u2014 the a priori \u2014 of the LLM.<\/p><p>For all we know, human brains also create latent space representations. The neurons in our brain work in a very similar fashion to how neurons work in a neural network.<\/p><p>Yet, despite this similarity, it appears that the latent space representations that a human brain produces and the latent space representations that an AI <em>can<\/em> produce are different from one another.<\/p><p>The two latent space representations likely overlap but they also differ significantly in kind and quality because of AI\u2019s far greater dimensional scope.<\/p><p>Now imagine we could build AI so that the logic of possibility that defines the human brain gets extra latent spaces.<\/p><p>Imagine we built AI to add to our human mind logical spaces of possibility that we humans could travel but not produce on our own. The consequence would be that we humans could discover truths and think things that no human could have ever thought before AI. 
In this case, no one knows where the human mind might end and AI might begin.<\/p><p>We could take any theme and approach it from whole new perspectives. Imagine what this kind of co-cogitation between humans and AI would do to our current concept of interiority! Can you imagine what it would do to how we understand terms like mind, thought, having an idea or being creative?<\/p><p>As I outline this vision, I can hear the critical voices. They tell me that I make AI sound like a philosophical project while the companies building AI have very different motives.<\/p><p>I am entirely aware that I am giving AI philosophical and poetic dignity. And I do so consciously because I think AI has the potential to be an extraordinary philosophical event. It is our task as philosophers, artists, poets, writers and humanists to render this potential visible and relevant.<\/p><p>All this certainly has the makings of a new pivotal age.<\/p><p><strong>Gardels: <\/strong>To grasp how deep learning through what AI scientists call backpropagation \u2014 the process by which prediction errors are fed backward through an artificial neural network to adjust the strength of its connections \u2014 could lead to interiority and intention, it might be useful to look at an analogy from the materialist view of biology about how consciousness arises. The core issue here is whether disembodied intelligence can mimic embodied intelligence through deep learning.<\/p><p>Where does AI depart from, and where is it similar to the neural Darwinism described here by Gerald Edelman, the Nobel Prize-winning neuroscientist? 
What Edelman refers to as \u201creentrant interaction\u201d appears quite similar to \u201cbackpropagation.\u201d<\/p><figure class=\"quote\">\n  <blockquote class=\"quote__container\">\n    <div class=\"quote__text\">\n      &#8220;Imagine we built AI to add to our human mind logical spaces of possibility that we humans could travel but not produce on our own.&#8221;    <\/div>\n  <\/blockquote>\n<\/figure><p>According to Edelman, \u201cCompetition for advantage in the environment enhances the spread and strength of certain synapses, or neural connections, according to the \u2018value\u2019 previously decided by evolutionary survival. The amount of variance in this neural circuitry is very large. Certain circuits get selected over others because they fit better with whatever is being presented by the environment. In response to an enormously complex constellation of signals, the system is self-organizing according to Darwin\u2019s population principle. It is the activity of this vast web of networks that entails consciousness by means of what we call \u2018reentrant interactions\u2019 that help to organize \u2018reality\u2019 into patterns.<\/p><p>The thalamocortical networks were selected during evolution because they provided humans with the ability to make higher-order discriminations and adapt in a superior way to their environment. 
Such higher-order discriminations confer the ability to imagine the future, to explicitly recall the past and to be conscious of being conscious.<\/p><p>Because each loop reaches closure by completing its circuit through the varying paths from the thalamus to the cortex and back, the brain can \u2018fill in\u2019 and provide knowledge beyond that which you immediately hear, see or smell. The resulting discriminations are known in philosophy as qualia. These discriminations account for the intangible awareness of mood, and they define the greenness of green and the warmness of warmth. Together, qualia make up what we call consciousness.\u201d<\/p><p><strong>Rees: <\/strong>There are neural processes happening in AI systems that are similar to \u2014 but not the same as \u2014 those in humans.<\/p><p>It seems likely that there is some form of backpropagation in the brain. And we just talked about the fact that both biological neural networks and artificial neural networks build latent space representations. And there is more.<\/p><p>But I do not think that makes them have interiority or intentionality in the way we have come to understand these terms.<\/p><p>In fact, I think the philosophical significance of AI is that it invites us to reconsider the way we previously understood these terms.<\/p><p>And the close connection between backpropagation and reentry that you observe is a great example of that.<\/p><p>The person who did perhaps more than anyone to make the concept of backpropagation accessible and widely known was David Rumelhart, a very influential psychologist and cognitive scientist who, like Edelman, lived and worked in San Diego.<\/p><p>Both Rumelhart and Edelman were key people in the <a href=\"https:\/\/plato.stanford.edu\/entries\/connectionism\/\">connectionism school<\/a>. 
I say this because I think the theoretical impulse behind reentry and backpropagation is almost identical: the effort to develop a conceptual vocabulary that allows us to undifferentiate the biological and artificial neural networks in order to understand the brain better and in order to build better neural networks.<\/p><p>Some have suggested that the work of the connectionists was an attempt to think about the brain in terms of computers \u2014 but one could just as well say it was an attempt to think about computers or AI in terms of biology.<\/p><p>At base, what mattered was the invention of a vocabulary that didn\u2019t need to make distinctions.<\/p><p>There is a space in the middle, an overlap.<\/p><p>It is very difficult to overemphasize how powerful this kind of conceptual work has been over the last 40 years.<\/p><p>Arguably, the work of people like Rumelhart and Edelman has led to a concept of intelligence that can be described in a substrate-independent manner. And these concepts are not just theoretical concepts but concrete engineering possibilities.<\/p><p>Does this mean that human brains and AI are the same thing?<\/p><p>Of course not. Are birds, planes and drones all the same thing? No, but they all make use of the general laws of aerodynamics. And the same may be true for brains and AI. The material infrastructure of intelligence is very different \u2014 but some of the principles that organize these infrastructures may be very similar.<\/p><p>In some instances, we likely will want to build AI systems similar to human brains. But in many cases, I presume, we will not. What makes AI attractive, in my thinking, is that we can build intelligent systems that do not yet exist \u2014 but that are perfectly possible.<\/p><p>I often think of AI as a kind of very early-stage experimental embryology. Indeed, I often think that AI is doing for intelligence what synthetic biology did for nature. 
Meaning, synthetic biology transformed nature into a vast field of possibility. The number of things that exist in nature is minuscule compared to the things that could exist in nature. In fact, many more things have existed in the course of evolution than there are now, and there is no reason why we can\u2019t combine strands of DNA and make new things. Synthetic biology is the field of practice that can bring these possible things into existence.<\/p><figure class=\"quote\">\n  <blockquote class=\"quote__container\">\n    <div class=\"quote__text\">\n      &#8220;What makes AI attractive, in my thinking, is that we can build intelligent systems that do not yet exist \u2014 but that are perfectly possible.&#8221;    <\/div>\n  <\/blockquote>\n<\/figure><p>The same is true for AI and intelligence. Today, intelligence is no longer defined by a single or a few instances of existing intelligences but by the very many intelligent things that could exist.<\/p><p><strong>Gardels: <\/strong>Back in the 1930s, much of philosophy from Heidegger to Carl Schmitt was against an emergent technological system that alienated humans from \u201cbeing.\u201d As Schmitt put it back then, \u201ctechnical thinking is foreign to all social traditions; the machine has no tradition. 
One of Karl Marx\u2019s seminal sociological discoveries is that technology is the true revolutionary principle, besides which all revolutions based on natural law are antiquated forms of recreation. A society built exclusively on progressive technology would thus be nothing but revolutionary; it would soon destroy itself and its technology.\u201d As Marx put it, \u201call that is solid melts into air.\u201d<\/p><p>Does the nature of AI make Schmitt\u2019s perspective obsolete, or is it simply a fulfillment of his perspective?<\/p><p><strong>Rees: <\/strong>I think the answer \u2014 and I take that to be very good news \u2014 is yes, it makes Schmitt\u2019s perspective obsolete.<\/p><p>Let me first say something about Schmitt. He was essentially apocalyptic in his thinking.<\/p><p>Like all apocalyptic thinkers, he had a more or less definite ontological, and in his case also religious, worldview. Everything in his world had a definite, metaphysical meaning. And he thought the modern, liberal world, the world of the Enlightenment, was out to destroy the timeless, ultimately divine, order of things. What is more, he thought that when this happened, all hell would break loose, and the end of the world would begin to unfold.<\/p><p>The lines that you quote illustrate this. On the one hand, the modern Enlightenment period: the factory, technology, substancelessness, the relativizing quality of money, etc. \u2014 and, on the other hand, social, that is, racially defined, national traditions, images and symbols.<\/p><p>Schmitt was worried that the liberal order would de-substantize the world. Everything would become relative. And at least if we go by his writings, he thought that Jews were one of the key driving forces of this de-substantification of the world. 
Famously, Schmitt was a rabid antisemite.<\/p><p>He was so worried about the end of the world that he aligned himself with Hitler and the Nazis and their agendas.<\/p><p>From today\u2019s perspective, of course, it is obvious that the ones who embraced modern technology to de-substantize humans, to deprive them of their humanity and to murder them on an industrial scale, were the Nazis.<\/p><p>It is difficult to suppress a comment on Heidegger here, who sought to \u201cdefend being against technology.\u201d That said, I think there are important differences between the two.<\/p><p>But let me go to the second part of my reply, why I think AI renders his world obsolete.<\/p><p>AI has proven that the either-or logic at the core of Schmitt\u2019s thinking doesn\u2019t hold. One example of this is provided by Schmitt\u2019s curious appropriation of Marx.<\/p><p>Famously, Marx described the rise of industry enabled by the steam engine as a dehumanizing event. Before capitalists discovered how they could use the steam engine to fabricate goods, most goods were made in artisanal workshops. Maybe these workshops were harsh places. But, or so Marx suggests, they were also places of human dignity and virtuosity.<\/p><p>Why? Well, because at the center of these workshops were humans who used tools. As Marx saw it, tools are nothing in themselves. What one can do with a tool depends entirely on the imagination and the virtuosity of the human who uses it.<\/p><p>With the steam engine, everything changed. It gave rise to factories in which goods were made by machines rather than by artisans. However, the machines were not entirely autonomous. They needed humans to assist them. That is, what the machines needed were not artisans. What they needed was not human imagination and virtuosity. On the contrary, what was needed were humans that could function as extensions of the machine. 
That made these humans mindless and reduced them to mere machines.<\/p><p>That is why Marx described the machine as the \u201cother\u201d of the human and the factory as the place where humans are deprived of their own humanity.<\/p><p>Schmitt appropriated this for his own argument to juxtapose his kind of substance thinking with the modern, technical world. The net outcome is that you now have a juxtaposition of timeless, substantive, metaphysical truth on the one hand \u2014 and, on the other, the modern world of machines, of technology, of functionality, of relativity of values, of substance-less humans.<\/p><p>Hence, technology, for Schmitt, comes into view as an unnatural violence against the metaphysically timeless and true.<\/p><figure class=\"quote\">\n  <blockquote class=\"quote__container\">\n    <div class=\"quote__text\">\n      &#8220;The alternative to being against AI is to enter AI and try to show what it could be.&#8221;    <\/div>\n  <\/blockquote>\n<\/figure><p>Schmitt\u2019s distinction was most certainly not timeless but intrinsic to the modern period and deeply indebted to its paradigm of the new machine versus the old human.<\/p><p>The deep-learning-based AI systems we have today defy and escape the \u201ceither-or\u201d distinction of Schmitt \u2014 or of Marx and of Heidegger and all those who come after them.<\/p><p>AI clearly and beautifully shows us that there is a whole world in between these distinctions. 
A world of things, of which AI is just one, that have some qualities of intelligence and some qualities of machines \u2014 and that are reducible to neither. Things that are at once natural and built.<\/p><p>AI invites us to rethink ourselves and the world from within this in-between.<\/p><p>Let me say that I understand the wish to render human life meaningful. To render thought and intellectual insight critical and, so too, art, creativity, discovery, science and community. I totally get it and share it.<\/p><p>But I think the suggestion that all these things are on the one side, and AI and those who build it are on the other, is somewhat surprising and unfortunate.<\/p><p>A critical ethos grounded in this distinction reproduces the world it says it is against.<\/p><p>The alternative to being against AI is to enter AI and try to show what it could be. We need more in-between people. If my suggestion that AI is an epochal rupture is only modestly accurate, then I don\u2019t really see what the alternative is.<\/p><figure class=\"wp-block-video\"><video autoplay controls loop preload=\"auto\" src=\"https:\/\/s3.us-east-2.amazonaws.com\/assets-noemamag.com\/2025%2F02%2Fliq-1-8.mp4\" playsinline class=\"mcloud-attachment-80206\"><\/video><figcaption class=\"wp-element-caption\">This video, &#8220;overflow,&#8221; was generated with the Limn AI system based on a prompt enticing the AI to categorize an ambiguous drawing. The video reflects the AI\u2019s effort to work through its learned categories of representation without ever arriving at a stable one, instead exploring the hidden spaces between existing categories. (LIMN\/Noema Magazine)<\/figcaption><\/figure><h2 class=\"wp-block-heading has-text-align-center\" id=\"h-iii-in-betweenness-amp-symbiogenesis\"><strong>III. 
In-Betweenness &amp; Symbiogenesis<\/strong><\/h2><p><strong>Gardels: <\/strong>I\u2019m wondering if there is a correspondence between your \u201cin-betweenness\u201d point and Blaise Ag\u00fcera y Arcas\u2019 idea that evolution advances not only by natural selection but through \u201csymbiogenesis\u201d \u2014 the mutual transformation that conjoins separate entities into one interdependent organism through the transfer of new information, for example, DNA fragments carried by bacteria that are \u201ccopy and pasted\u201d into the cells they penetrate. What results is not either\/or, but something new created by symbiosis.<\/p><p><strong>Rees: <\/strong>I believe Blaise, like me, was influenced by an <a href=\"https:\/\/groups.csail.mit.edu\/medg\/people\/psz\/Licklider.html\">essay<\/a> the American computer scientist <a href=\"https:\/\/en.wikipedia.org\/wiki\/J._C._R._Licklider\">Joseph Licklider<\/a> published in 1960, called \u201cMan-Computer Symbiosis.\u201d<\/p><p>This is how the essay begins:<\/p><p>\u201cThe fig tree is pollinated only by the insect <em>Blastophaga grossorun<\/em>. The larva of the insect lives in the ovary of the fig tree, and there it gets its food. The tree and the insect are thus heavily interdependent: the tree cannot reproduce without the insect; the insect cannot eat without the tree; together, they constitute not only a viable but a productive and thriving partnership. This cooperative \u2018living together in intimate association, or even close union, of two dissimilar organisms\u2019 is called \u2018symbiosis.\u2019\u201d<\/p><p>Licklider goes on: \u201cAt present (\u2026) there are no man-computer symbioses. 
The purposes of this paper are to present the concept and, hopefully, to foster the development of man-computer symbiosis by analyzing some problems of interaction between men and computing machines, calling attention to applicable principles of man-machine engineering, and pointing out a few questions to which research answers are needed. The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.\u201d<\/p><p>What does symbiosis mean? It means that one organism cannot survive without the other, which belongs to a different species. More specifically, it means that one organism is dependent on functions performed by the other organism. More philosophically put, symbiosis means that there is an indistinguishability in the middle. An impossibility to say where one organism ends and the other (or the others) begin.<\/p><p>Is it conceivable that this kind of interdependence will in the future occur between humans and AI?<\/p><p>The traditional answer is: Absolutely not. The old belief is that humans belong to nature and, more specifically, to biology, to living things that can self-reproduce. Computers, on the other hand, belong to a totally different ontological category, the category of artificial, the merely technical. They don\u2019t grow, they are constructed and built. They have neither life nor being.<\/p><p>Symbiosis, in that old way of thinking, is only possible within the realm of nature, between living things. In this way of thinking, there cannot possibly be a human-computer symbiosis.<\/p><p>I think there was also a sense that what Licklider meant was an enrolling of humans into the machine concept. Perhaps like a cyborg. 
And as humans are supposedly more than or different from machines, that would mean a loss of that which makes us human, of that which sets us apart from machines.<\/p><figure class=\"quote\">\n  <blockquote class=\"quote__container\">\n    <div class=\"quote__text\">\n      &#8220;AI can have agency, creativity, knowledge, language and understanding without either being alive or being human.&#8221;    <\/div>\n  <\/blockquote>\n<\/figure><p>But as we have discussed, AI renders this old, classical modern distinction between living humans or beings and inanimate machines or things insufficient.<\/p><p>AI leads us into a territory that lies outside of these old distinctions. If one enters this territory, one can see that things \u2014 things like AI \u2014 can have agency, creativity, knowledge, language and understanding without either being alive or being human.<\/p><p>That is, AI affords us an opportunity to experience the world anew and to rethink how we have thus far organized things in the world, the categories to which we assigned them.<\/p><p>But here is the question: Is human-AI symbiosis possible from within this new, still emergent territory \u2014 this in-between territory \u2014 in the sense of the indistinguishability just described?<\/p><p>I think so. And I am excited about it. 
A bit like Licklider, I am looking forward to a \u201cpartnership\u201d that will allow us to \u201cthink as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.\u201d<\/p><p>When we can think thoughts we cannot think without AI, and when AI can process data in ways it cannot on its own, then no one can say where humans end and AI begins. Then we have indistinguishability, a symbiosis.<\/p><p>Let me add that what I describe here \u2014 with Licklider \u2014 is not a gradual human dependency on AI, where we outsource all thinking and decision-making to AI until we are barely able to think or decide on our own.<\/p><p>Quite the opposite. I am describing a situation of maximal human intellectual curiosity. A state where being human is being more than human. Where the cognitive boundary between humans and AI becomes meaningfully indistinct.<\/p><p>Is this different, in an ontologically meaningful way, from fungi-tree relationships, the mycorrhizal partnerships through which fungi and trees exchange nutrients and signals?<\/p><p>Their relationship is essentially a communication, in which they cogitate together. Neither party can produce or process the information exchanged in this communication alone. The actual processing of the information \u2014 cognition \u2014 happens at the interface between them: Call it symbiosis.<\/p><p>What, if any, is the ontological difference between human-AI symbiosis and this kind of fungi-tree symbiosis? I fail to see one.<\/p><p><strong>Gardels: <\/strong>Perhaps such a symbiosis of inorganic and organic intelligence will spawn what Benjamin Bratton calls \u201cplanetary sapience,\u201d where AI helps us better understand natural systems and align with them?<\/p><p><strong>Rees: <\/strong>What if we linked AI to this fungi-tree symbiosis? AI could read and translate chemical and electrical signals from fungi-tree-soil networks. These signals contain information about ecosystem health, nutrient flows, stress responses. 
That is, AI could make the communication between fungi and trees intelligible to humans in real time.<\/p><p>We humans could then understand something \u2014 and possibly pose questions and thereby communicate \u2014 that we simply couldn\u2019t otherwise, independent of AI. And simultaneously we could help AI ask the right questions and process information in ways it cannot on its own.<\/p><p>Now let\u2019s expand the scope: What if AI could connect us to large-scale planetary systems that are impossible to know without AI? In fact, what if AI were to become something like a self-monitoring planetary system into which we are directly looped? As Bratton has put it, \u201cOnly when intelligence becomes artificial and can be scaled into massive, distributed systems beyond the narrow confines of biological organisms, can we have a knowledge of the planetary systems in which we live.\u201d<\/p><p>Perhaps in a way where \u2014 as DNA is the best storage for information we know \u2014 part of the information storage and the compute the AI relies on is actually done by mycorrhizal networks?<\/p><p>If anything, I can\u2019t wait to have such a whole Earth symbiotic state \u2014 and to be a part of this form of reciprocal communication.<\/p><p><strong>Gardels: <\/strong>What is the next step to guiding us toward symbiosis between humans and intelligent machines that opens up the possibilities of AI augmenting the human experience as never before?<\/p><p><strong>Rees: <\/strong>Ours is a time when philosophical research really matters. I mean, really, really matters.<\/p><p>As we have elaborated in this conversation, we live in philosophically discontinuous times. The world has been outgrowing the concepts we have lived by for some time now.<\/p><p>To some, that is very exciting. To many, however, it is not. 
The insecurity and confusion are widespread and real.<\/p><p>If history is any guide, we can assume that political unrest will occur, with possibly far-reaching consequences, including autocratic strongmen who try to enforce a clinging to the past.<\/p><p>One way to prevent such unfortunate outcomes is to do the philosophical work that can lead to new concepts that allow us all to navigate uncharted pathways.<\/p><figure class=\"quote\">\n  <blockquote class=\"quote__container\">\n    <div class=\"quote__text\">\n      &#8220;AI could make the communication between fungi and trees intelligible to humans in real time.&#8221;    <\/div>\n  <\/blockquote>\n<\/figure><p>However, the kind of philosophical work that is needed cannot be done in the solitude of ivory towers. We need philosophers in the wild, in AI labs and companies. We need philosophers who can work alongside engineers to jointly discover new ways of thinking and experiencing that might be afforded to us by AI.<\/p><p>What I dream of are philosophical R&amp;D labs that can experiment at the intersection of philosophical conceptual research, AI engineering and product making.<\/p><p><strong>Gardels: <\/strong>Can you give a concrete example?<\/p><p><strong>Rees: <\/strong>I think we live in unprecedented times, so giving an example is difficult. 
However, there is an important historical reference, the Bauhaus School.<\/p><p>When Walter Gropius founded the Bauhaus, in 1919, many German intellectuals were deeply skeptical of the industrial age. Not so Gropius. He experienced the possibilities that new materials like glass, steel and concrete offered as a conceptual rupture with the 19th century.<\/p><p>And so, he argued \u2014 very much against the dominant opinion \u2014 that it was the duty of architects and artists to explore these new materials, and to invent forms and products that would lift people out of the 19th and into the 20th century.<\/p><p>Today, we need something akin to the Bauhaus \u2014 but focused on AI.<\/p><p>We need philosophical R&amp;D labs that would allow us to explore and practice AI as the experimental philosophy it is.<\/p><p>Billions are being poured into many different aspects of AI but very little into the kind of philosophical work that can help us discover and invent new concepts \u2014 new vocabularies for being human \u2014 in the world today. The Antikythera project of the Berggruen Institute under the leadership of Bratton is one small exception.<\/p><p>Philosophical R&amp;D labs will not happen automatically. There will be no new guiding philosophies or philosophical ideas if we do not make strategic investments.<\/p><p>In the absence of new concepts, people \u2014 the public as much as engineers \u2014 will continue to understand the new in terms of the old. As this doesn\u2019t work, there will be decades of turmoil.<\/p>","protected":false},"excerpt":{"rendered":"","protected":false},"author":6570,"featured_media":0,"template":"","wpm-article-type":[5],"wpm-article-topic":[21,23,20],"wpm-article-tag":[],"class_list":["post-80202","wpm-article","type-wpm-article","status-publish","hentry","wpm-article-type-interview","wpm-article-topic-digital-society","wpm-article-topic-philosophy-culture","wpm-article-topic-technology-and-the-human"],"acf":[],"apple_news_notices":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.0 (Yoast SEO v25.0) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Why AI Is A Philosophical Rupture<\/title>\n<meta name=\"description\" content=\"The symbiosis of humans and technology portends a new AIxial Age.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.noemamag.com\/why-ai-is-a-philosophical-rupture\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Why AI Is A Philosophical Rupture\" \/>\n<meta property=\"og:description\" content=\"The symbiosis of humans and technology portends a new AIxial Age.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.noemamag.com\/why-ai-is-a-philosophical-rupture\/\" \/>\n<meta property=\"og:site_name\" content=\"NOEMA\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/NoemaMag\" \/>\n<meta property=\"article:modified_time\" content=\"2025-06-12T00:36:25+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/noemamag.imgix.net\/2025\/02\/card-display-tobias.png?fit=scale&fm=png&h=563&ixlib=php-3.3.1&w=1024&wpsize=large&s=71ab38e3dc6d5fc7d875356c7ae81acc\" \/>\n\t<meta property=\"og:image:width\" content=\"1024\" \/>\n\t<meta 
property=\"og:image:height\" content=\"563\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:image\" content=\"https:\/\/noemamag.imgix.net\/2025\/02\/Noema-Twitter-Card-Vertical-Template-87.png?fm=png&ixlib=php-3.3.1&s=acc4cce65665a4ca77b03fae9e2130c1\" \/>\n<meta name=\"twitter:site\" content=\"@NoemaMag\" \/>\n<meta name=\"twitter:label1\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"36 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.noemamag.com\/why-ai-is-a-philosophical-rupture\/\",\"url\":\"https:\/\/www.noemamag.com\/why-ai-is-a-philosophical-rupture\/\",\"name\":\"Why AI Is A Philosophical Rupture\",\"isPartOf\":{\"@id\":\"https:\/\/www.noemamag.com\/#website\"},\"datePublished\":\"2025-02-04T17:55:23+00:00\",\"dateModified\":\"2025-06-12T00:36:25+00:00\",\"description\":\"The symbiosis of humans and technology portends a new AIxial Age.\",\"breadcrumb\":{\"@id\":\"https:\/\/www.noemamag.com\/why-ai-is-a-philosophical-rupture\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.noemamag.com\/why-ai-is-a-philosophical-rupture\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.noemamag.com\/why-ai-is-a-philosophical-rupture\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.noemamag.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Why AI Is A Philosophical Rupture\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.noemamag.com\/#website\",\"url\":\"https:\/\/www.noemamag.com\/\",\"name\":\"NOEMA\",\"description\":\"Noema 
Magazine\",\"publisher\":{\"@id\":\"https:\/\/www.noemamag.com\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.noemamag.com\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.noemamag.com\/#organization\",\"name\":\"NOEMA\",\"url\":\"https:\/\/www.noemamag.com\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.noemamag.com\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/noemamag.imgix.net\/2023\/11\/noema-logo.png?fm=png&ixlib=php-3.3.1&s=5f5be9b261a7cf7e336f6f6beea6e539\",\"contentUrl\":\"https:\/\/noemamag.imgix.net\/2023\/11\/noema-logo.png?fm=png&ixlib=php-3.3.1&s=5f5be9b261a7cf7e336f6f6beea6e539\",\"width\":305,\"height\":69,\"caption\":\"NOEMA\"},\"image\":{\"@id\":\"https:\/\/www.noemamag.com\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/NoemaMag\",\"https:\/\/x.com\/NoemaMag\"]}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. 
-->","yoast_head_json":{"title":"Why AI Is A Philosophical Rupture","description":"The symbiosis of humans and technology portends a new AIxial Age.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.noemamag.com\/why-ai-is-a-philosophical-rupture\/","og_locale":"en_US","og_type":"article","og_title":"Why AI Is A Philosophical Rupture","og_description":"The symbiosis of humans and technology portends a new AIxial Age.","og_url":"https:\/\/www.noemamag.com\/why-ai-is-a-philosophical-rupture\/","og_site_name":"NOEMA","article_publisher":"https:\/\/www.facebook.com\/NoemaMag","article_modified_time":"2025-06-12T00:36:25+00:00","og_image":[{"width":1024,"height":563,"url":"https:\/\/noemamag.imgix.net\/2025\/02\/card-display-tobias.png?fit=scale&fm=png&h=563&ixlib=php-3.3.1&w=1024&wpsize=large&s=71ab38e3dc6d5fc7d875356c7ae81acc","type":"image\/png"}],"twitter_card":"summary_large_image","twitter_image":"https:\/\/noemamag.imgix.net\/2025\/02\/Noema-Twitter-Card-Vertical-Template-87.png?fm=png&ixlib=php-3.3.1&s=acc4cce65665a4ca77b03fae9e2130c1","twitter_site":"@NoemaMag","twitter_misc":{"Est. 
reading time":"36 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/www.noemamag.com\/why-ai-is-a-philosophical-rupture\/","url":"https:\/\/www.noemamag.com\/why-ai-is-a-philosophical-rupture\/","name":"Why AI Is A Philosophical Rupture","isPartOf":{"@id":"https:\/\/www.noemamag.com\/#website"},"datePublished":"2025-02-04T17:55:23+00:00","dateModified":"2025-06-12T00:36:25+00:00","description":"The symbiosis of humans and technology portends a new AIxial Age.","breadcrumb":{"@id":"https:\/\/www.noemamag.com\/why-ai-is-a-philosophical-rupture\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.noemamag.com\/why-ai-is-a-philosophical-rupture\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/www.noemamag.com\/why-ai-is-a-philosophical-rupture\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.noemamag.com\/"},{"@type":"ListItem","position":2,"name":"Why AI Is A Philosophical Rupture"}]},{"@type":"WebSite","@id":"https:\/\/www.noemamag.com\/#website","url":"https:\/\/www.noemamag.com\/","name":"NOEMA","description":"Noema 
Magazine","publisher":{"@id":"https:\/\/www.noemamag.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.noemamag.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.noemamag.com\/#organization","name":"NOEMA","url":"https:\/\/www.noemamag.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.noemamag.com\/#\/schema\/logo\/image\/","url":"https:\/\/noemamag.imgix.net\/2023\/11\/noema-logo.png?fm=png&ixlib=php-3.3.1&s=5f5be9b261a7cf7e336f6f6beea6e539","contentUrl":"https:\/\/noemamag.imgix.net\/2023\/11\/noema-logo.png?fm=png&ixlib=php-3.3.1&s=5f5be9b261a7cf7e336f6f6beea6e539","width":305,"height":69,"caption":"NOEMA"},"image":{"@id":"https:\/\/www.noemamag.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/NoemaMag","https:\/\/x.com\/NoemaMag"]}]}},"parsely":{"version":"1.1.0","canonical_url":"https:\/\/noemamag.com\/why-ai-is-a-philosophical-rupture","smart_links":{"inbound":0,"outbound":0},"traffic_boost_suggestions_count":0,"meta":{"@context":"https:\/\/schema.org","@type":"NewsArticle","headline":"Why AI Is A Philosophical Rupture","url":"http:\/\/www.noemamag.com\/why-ai-is-a-philosophical-rupture","mainEntityOfPage":{"@type":"WebPage","@id":"http:\/\/www.noemamag.com\/why-ai-is-a-philosophical-rupture"},"thumbnailUrl":"","image":{"@type":"ImageObject","url":""},"articleSection":"Uncategorized","author":[{"@type":"Person","name":"Tobias Rees"}],"creator":["Tobias Rees"],"publisher":{"@type":"Organization","name":"NOEMA","logo":"https:\/\/www.noemamag.com\/wp-content\/uploads\/2020\/06\/cropped-ms-icon-310x310-1.png"},"keywords":[],"dateCreated":"2025-02-04T17:55:23Z","datePublished":"2025-02-04T17:55:23Z","dateModified":"2025-06-12T00:36:25Z"},"rendered":"<script type=\"application\/ld+json\" 
class=\"wp-parsely-metadata\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@type\":\"NewsArticle\",\"headline\":\"Why AI Is A Philosophical Rupture\",\"url\":\"http:\\\/\\\/www.noemamag.com\\\/why-ai-is-a-philosophical-rupture\",\"mainEntityOfPage\":{\"@type\":\"WebPage\",\"@id\":\"http:\\\/\\\/www.noemamag.com\\\/why-ai-is-a-philosophical-rupture\"},\"thumbnailUrl\":\"\",\"image\":{\"@type\":\"ImageObject\",\"url\":\"\"},\"articleSection\":\"Uncategorized\",\"author\":[{\"@type\":\"Person\",\"name\":\"Tobias Rees\"}],\"creator\":[\"Tobias Rees\"],\"publisher\":{\"@type\":\"Organization\",\"name\":\"NOEMA\",\"logo\":\"https:\\\/\\\/www.noemamag.com\\\/wp-content\\\/uploads\\\/2020\\\/06\\\/cropped-ms-icon-310x310-1.png\"},\"keywords\":[],\"dateCreated\":\"2025-02-04T17:55:23Z\",\"datePublished\":\"2025-02-04T17:55:23Z\",\"dateModified\":\"2025-06-12T00:36:25Z\"}<\/script>","tracker_url":"https:\/\/cdn.parsely.com\/keys\/noemamag.com\/p.js"},"_links":{"self":[{"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/wpm-article\/80202","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/wpm-article"}],"about":[{"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/types\/wpm-article"}],"author":[{"embeddable":true,"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/users\/6570"}],"wp:attachment":[{"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/media?parent=80202"}],"wp:term":[{"taxonomy":"wpm-article-type","embeddable":true,"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/wpm-article-type?post=80202"},{"taxonomy":"wpm-article-topic","embeddable":true,"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/wpm-article-topic?post=80202"},{"taxonomy":"wpm-article-tag","embeddable":true,"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/wpm-article-tag?post=80202"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}