{"id":81753,"date":"2025-04-08T15:52:37","date_gmt":"2025-04-08T15:52:37","guid":{"rendered":"https:\/\/www.noemamag.com"},"modified":"2025-06-16T16:45:51","modified_gmt":"2025-06-16T16:45:51","slug":"ai-is-evolving-and-changing-our-understanding-of-intelligence","status":"publish","type":"wpm-article","link":"https:\/\/www.noemamag.com\/ai-is-evolving-and-changing-our-understanding-of-intelligence","title":{"rendered":"AI Is Evolving \u2014\u00a0And Changing Our Understanding Of Intelligence"},"content":{"rendered":"<p>Dramatic advances in artificial intelligence today are compelling us to rethink our understanding of what intelligence truly <em>is<\/em>. Our new insights will enable us to build better AI and understand ourselves better.<\/p><p>In short, we are in paradigm-shifting territory.<\/p><div>\n    <iframe loading=\"lazy\" id=\"noa-web-audio-player\"\n            style=\"border: none\"\n            src=\"https:\/\/embed-player.newsoveraudio.com\/v4?key=n0e13g&#038;id=https:\/\/www.noemamag.com\/ai-is-evolving-and-changing-our-understanding-of-intelligence\/&#038;bgColor=F3F3F3&#038;color=6D6D6D&#038;progressBgColor=F7F7F7&#038;progressBorderColor=6D6D6D&#038;playColor=F3F3F3&#038;titleColor=383D3D&#038;timeColor=6D6D6D&#038;speedColor=6D6D6D&#038;noaLinkColor=6D6D6D&#038;noaLinkHighlightColor=039BE5\"\n            width=\"100%\" height=\"110px\"><\/iframe>\n<\/div><p>Paradigm shifts are often fraught because it\u2019s easier to adopt new ideas when they are compatible with one\u2019s existing worldview but harder when they\u2019re not. A classic example is the collapse of the geocentric paradigm, which dominated cosmological thought for roughly two millennia. In the geocentric model, the Earth stood still while the Sun, Moon, planets and stars revolved around us. 
The belief that we were at the center of the universe \u2014 bolstered by Ptolemy\u2019s theory of epicycles, a major scientific achievement in its day \u2014 was both intuitive and compatible with religious traditions. Hence, Copernicus\u2019s heliocentric paradigm wasn\u2019t just a scientific advance but a hotly contested heresy and perhaps even, for some, as Benjamin Bratton notes, an <a href=\"https:\/\/www.noemamag.com\/the-five-stages-of-ai-grief\/\">existential trauma<\/a>. So, today, artificial intelligence.<\/p><p>In this essay, we will describe five interrelated paradigm shifts informing our development of AI:<\/p><ol class=\"wp-block-list\"><li><strong><em>Natural Computing<\/em><\/strong> \u2014 Computing existed in nature long before we built the first \u201cartificial computers.\u201d Understanding computing as a natural phenomenon will enable fundamental advances not only in computer science and AI but also in physics and biology.<\/li>\n\n<li><strong><em>Neural Computing<\/em><\/strong> \u2014&nbsp;Our brains are an exquisite instance of natural computing. Redesigning the computers that power AI so they work more like a brain will greatly increase AI\u2019s energy efficiency \u2014 and its capabilities too.<\/li>\n\n<li><strong><em>Predictive Intelligence<\/em><\/strong> \u2014 The success of large language models (LLMs) shows us something fundamental about the nature of intelligence: it involves statistical modeling of the future (including one\u2019s own future actions) given evolving knowledge, observations and feedback from the past. This insight suggests that current distinctions between designing, training and running AI models are transitory; more sophisticated AI will evolve, grow and learn continuously and interactively, as we do.<\/li>\n\n<li><strong><em>General Intelligence<\/em><\/strong> \u2014 Intelligence does not necessarily require biologically based computation. 
Although AI models will continue to improve, they are already broadly capable, tackling an increasing range of cognitive tasks with a skill level approaching and, in some cases, exceeding individual human capability. In this sense, \u201cArtificial General Intelligence\u201d (AGI) may already be here \u2014 we just keep shifting the goalposts.<\/li>\n\n<li><strong><em>Collective Intelligence<\/em><\/strong> \u2014 Brains, AI agents and societies can all become more capable through increased scale. However, size alone is not enough. Intelligence is fundamentally social, powered by cooperation and the division of labor among many agents. In addition to causing us to rethink the nature of human (or \u201cmore than human\u201d) intelligence, this insight suggests social aggregations of intelligences and multi-agent approaches to AI development that could reduce computational costs, increase AI heterogeneity and reframe AI safety debates.<\/li><\/ol><p>Perhaps the greatest Copernican trauma of the AI era is simply coming to terms with how commonplace general and nonhuman intelligence may be. But to understand our own \u201cintelligence geocentrism,\u201d we must begin by reassessing our assumptions about the nature of computing, since it is the foundation of both AI and, we will argue, intelligence in any form.<\/p><h2 class=\"wp-block-heading\" id=\"h-natural-computation\">Natural Computation<\/h2><p>Is \u201ccomputer science\u201d a science at all? Often, it\u2019s regarded more as an engineering discipline, born alongside the World War II-era Electronic Numerical Integrator and Computer (ENIAC), the first fully programmable general-purpose electronic computer \u2014 and the distant ancestor of your smartphone.<\/p><p>Theoretical computer science predates computer engineering, though. 
A groundbreaking <a href=\"https:\/\/londmathsoc.onlinelibrary.wiley.com\/doi\/pdf\/10.1112\/plms\/s2-42.1.230\">1936 publication<\/a> by British mathematician Alan Turing introduced the imaginary device we now call the Turing Machine, consisting of a head that can move left or right along a tape, reading, erasing and writing symbols on the tape according to a set of rules. Endowed with suitable rules, a Turing Machine can follow instructions encoded on the tape \u2014 what we\u2019d now call a computer program, or code \u2014 allowing such a \u201cUniversal Turing Machine\u201d (UTM) to carry out arbitrary computations. Turning this around, a computation is anything that can be done by a UTM. When the ENIAC was completed in 1945, it became the world\u2019s first real-life UTM.<\/p><p>Or maybe not. A small but growing roster of unorthodox researchers with deep backgrounds in both physics and computer science, such as Susan Stepney at the University of York, has made <a href=\"https:\/\/royalsocietypublishing.org\/doi\/full\/10.1098\/rspa.2014.0182\">the case<\/a>, in a 2014 paper in the journal \u201cProceedings of the Royal Society A,\u201d that the natural world is full of computational systems \u201cwhere there is no obvious human computer user.\u201d John Wheeler, a towering figure in 20th-century physics, championed the radical \u201cit from bit\u201d hypothesis, which holds that the underlying structure of the universe is computational. 
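The head-and-tape machine described above is simple enough to sketch in a few lines of code. The following minimal simulator is our own illustration: the rule-table format and the binary-increment example are choices made here for clarity, not anything specified in Turing's paper.

```python
# Minimal Turing machine simulator: a head moves along a tape, reading
# and writing symbols according to a rule table, then halts.
# (Illustrative sketch; rule format and example are our own.)

def run_turing_machine(rules, tape, head=0, state="start"):
    cells = dict(enumerate(tape))  # sparse tape; blank cells read " "
    while state != "halt":
        symbol = cells.get(head, " ")
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += {"L": -1, "R": 1, "N": 0}[move]
    span = range(min(cells), max(cells) + 1)
    return "".join(cells.get(i, " ") for i in span).strip()

# Rules for incrementing a binary number, with the head starting on its
# rightmost bit: propagate a carry leftward until it is absorbed.
INCREMENT = {
    ("start", "1"): ("0", "L", "start"),  # 1 + carry -> 0, carry moves left
    ("start", "0"): ("1", "N", "halt"),   # 0 + carry -> 1, done
    ("start", " "): ("1", "N", "halt"),   # ran off the left edge: new digit
}

print(run_turing_machine(INCREMENT, "1011", head=3))  # prints 1100
```

A different rule table yields a different machine; a universal machine is one whose rules interpret part of the tape itself as such a table.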
According to Wheeler, the elementary phenomena we take to be physical \u2014 quarks, electrons, photons \u2014 are products of <a href=\"https:\/\/philarchive.org\/archive\/WHEIPQ\">underlying computation<\/a>, like internet packets or image pixels.<\/p><p>In some interpretations of quantum mechanics, this computation takes place in a multiverse \u2014 that is, vast numbers of calculations occurring in parallel, entangled universes. However one interprets the underlying physics, the very real technology of <a href=\"https:\/\/blog.google\/technology\/research\/google-willow-quantum-chip\/\">quantum computing<\/a> taps into that parallelism, allowing us to perform certain calculations in minutes that would require the lifetime of the universe several times over on today\u2019s most powerful supercomputers. 
This is, by any measure, a <a href=\"https:\/\/www.foreignaffairs.com\/united-states\/race-lead-quantum-future-chou-manyika-neven\">paradigm shift<\/a> in computing.<\/p><p>Claims that computing underlies physical reality are hard to prove or disprove, but a clear-cut case for computation in nature came to light far earlier than Wheeler\u2019s \u201cit from bit\u201d hypothesis. John von Neumann, an accomplished mathematical physicist and another founding figure of computer science, discovered a profound link between computing and biology as far back as <a href=\"https:\/\/psycnet.apa.org\/record\/1952-04498-005\">1951<\/a>.<\/p><p>Von Neumann <a href=\"https:\/\/archive.org\/details\/theoryofselfrepr00vonn_0\/page\/n5\/mode\/2up\">realized<\/a> that for a complex organism to reproduce, it would need to contain instructions for building itself, along with a machine for reading and executing that instruction \u201ctape.\u201d The tape must also be copyable and include the instructions for building the machine that reads it. As it happens, the technical requirements for that \u201cuniversal constructor\u201d correspond precisely to the technical requirements for a UTM. Remarkably, von Neumann\u2019s insight anticipated the discovery of DNA\u2019s Turing-tape-like structure and function in <a href=\"https:\/\/www.nature.com\/articles\/171737a0\">1953<\/a>.<\/p><p>Von Neumann had shown that life is inherently computational. This may sound surprising, since we think of computers as decidedly <em>not<\/em> alive, and of living things as most definitely not computers. But it\u2019s true: DNA <em>is<\/em> code \u2014 although the code is hard to reverse-engineer and doesn\u2019t execute sequentially. Living things necessarily compute, not only to reproduce, but to develop, grow and heal. 
And it is becoming increasingly possible to edit or program foundational <a href=\"https:\/\/www.nature.com\/articles\/s41592-024-02338-y\">biological systems<\/a>.<\/p><p>Turing, too, made a seminal contribution to theoretical biology, by describing how tissue growth and differentiation could be implemented by cells capable of sensing and emitting chemical signals he called \u201c<a href=\"https:\/\/royalsocietypublishing.org\/doi\/10.1098\/rstb.1952.0012\">morphogens<\/a>\u201d \u2014 a powerful form of analog computing. Like von Neumann, Turing got this <a href=\"https:\/\/www.nature.com\/articles\/287795a0\">right<\/a>, despite never setting foot in a biology lab.<\/p><p>By revealing the computational basis of biology, Turing and von Neumann <a href=\"https:\/\/www.pnas.org\/doi\/10.1073\/pnas.2220022120\">laid<\/a> the foundations for <a href=\"https:\/\/www.stevenlevy.com\/artificial-life\">artificial life<\/a> or \u201cALife,\u201d a field that today remains obscure and pre-paradigmatic \u2014 much like artificial intelligence was until recently.<\/p><p>Yet there is every reason to believe that ALife will soon flower, as AI has. Real progress in AI had to wait until we could muster enough \u201cartificial\u201d computation to model (or at least mimic) the activity of the billions of neurons it takes to approach brain-like complexity. <em>De novo<\/em> ALife needs to go much further, recapitulating the work of billions of years of evolution on Earth. That remains a heavy lift. 
We are making progress, though.<\/p><p>Recent experiments from our <a href=\"https:\/\/github.com\/paradigms-of-intelligence\">Paradigms of Intelligence<\/a> team at Google have shown that in a simulated toy universe capable of supporting computation, we can go from nothing but randomness to having minimal \u201clife forms\u201d <a href=\"https:\/\/arxiv.org\/abs\/2406.19108\">emerge spontaneously<\/a>. One such experiment involves starting with a \u201csoup\u201d of random strings, each of which is 64 bytes long. Eight out of the 256 possible byte values correspond to the instructions of a minimal programming language from the 1990s called \u201c<a href=\"https:\/\/esolangs.org\/wiki\/Brainfuck\">Brainfuck<\/a>.\u201d These strings of bytes can be thought of as Turing tapes, and the eight computer instructions specify the elementary operations of a Turing machine. The experiment consists of repeatedly picking two tapes out of the soup at random, splicing them together, \u201crunning\u201d the spliced tape, separating the tapes again, and putting them back in the soup. In the beginning, nothing much appears to happen; we see only random tapes, with a byte modified now and then, apparently at random. But after a few million interactions, functional tapes emerge and begin to self-replicate: minimal artificial life.<\/p><p>The emergence of artificial life looks like a phase transition, as when water freezes or boils. But whereas conventional phases of matter are characterized by their statistical uniformity \u2014 an ordered atomic lattice for ice, random atomic positions for gas and somewhere in between for liquid \u2014 living matter is vastly more complex, exhibiting varied and purposeful structure at every scale. 
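The soup experiment's outer loop can be sketched as follows. This is a drastically simplified toy re-implementation, not the team's actual code: the instruction set below (a Brainfuck-like language with two data heads and copy operations, operating on its own tape) is our paraphrase, and details such as the instruction count, step limits and soup size differ from the published experiment.

```python
import random

TAPE_LEN = 64     # each string in the soup is 64 bytes, as in the essay
SOUP_SIZE = 1024  # far smaller than the real runs, for illustration
MAX_STEPS = 128   # cap on instructions executed per interaction

def run(tape, max_steps=MAX_STEPS):
    """Interpret a byte string as a self-modifying, Brainfuck-like
    program in which the tape is both code and data. (Our paraphrase
    of such a language, not the experiment's exact semantics.)"""
    tape = bytearray(tape)
    n = len(tape)
    pc = h0 = h1 = 0          # program counter and two data heads
    for _ in range(max_steps):
        if not 0 <= pc < n:
            break
        op = chr(tape[pc])
        if op == "<":   h0 = (h0 - 1) % n
        elif op == ">": h0 = (h0 + 1) % n
        elif op == "{": h1 = (h1 - 1) % n
        elif op == "}": h1 = (h1 + 1) % n
        elif op == "+": tape[h0] = (tape[h0] + 1) % 256
        elif op == "-": tape[h0] = (tape[h0] - 1) % 256
        elif op == ".": tape[h1] = tape[h0]  # copy head0 -> head1
        elif op == ",": tape[h0] = tape[h1]  # copy head1 -> head0
        elif op == "[" and tape[h0] == 0:    # jump past matching "]"
            depth = 1
            while depth and pc + 1 < n:
                pc += 1
                depth += {"[": 1, "]": -1}.get(chr(tape[pc]), 0)
        elif op == "]" and tape[h0] != 0:    # jump back to matching "["
            depth = 1
            while depth and pc > 0:
                pc -= 1
                depth += {"]": 1, "[": -1}.get(chr(tape[pc]), 0)
        pc += 1
    return bytes(tape)

def soup_step(soup):
    """One interaction: splice two random tapes, run, split them again."""
    i, j = random.sample(range(len(soup)), 2)
    joined = run(soup[i] + soup[j])
    soup[i], soup[j] = joined[:TAPE_LEN], joined[TAPE_LEN:]

soup = [bytes(random.randrange(256) for _ in range(TAPE_LEN))
        for _ in range(SOUP_SIZE)]
for _ in range(10_000):  # the real runs take millions of interactions
    soup_step(soup)
```

A short run like this still looks like noise; self-replicators typically need millions of interactions to emerge, so the sketch only illustrates the mechanics of the loop.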
This is because computation requires distinct functional parts that must work together, as evident in any machine, organism or program.<\/p><p>There\u2019s something magical about watching complex, purposeful and functional structures emerging out of random noise in our simulations. But there is nothing supernatural or miraculous about it. Similar phase transitions from non-life to life occurred on Earth billions of years ago, and we can hypothesize similar events taking place on other life-friendly planets or moons.<\/p><p>How could the intricacy of life ever arise, let alone persist, in a random environment? The answer: anything life-like that self-heals or reproduces is more \u201cdynamically stable\u201d than something inert or non-living because a living entity (or its progeny) will still be around in the future, while anything inanimate degrades over time, succumbing to randomness. 
Life is computational because its stability depends on growth, healing or reproduction; and computation itself must evolve to support these essential functions.<\/p><p>This computational view of life also offers insight into life\u2019s increasing complexity over evolutionary time. Because computational matter \u2014 including life itself \u2014 is made out of distinct parts that must work together, evolution operates simultaneously on the parts and on the whole, a process known in biology as \u201cmultilevel selection.\u201d<\/p><p>Existing parts (or organisms) can combine repeatedly to make ever larger, more complex entities. Long ago on the primordial sea floor (as the prevailing understanding goes) molecules <a href=\"https:\/\/pubmed.ncbi.nlm.nih.gov\/28785970\/\">came together<\/a> to form self-replicating or \u201cautocatalytic\u201d reaction cycles; these chemical cycles combined with fatty membranes to form the earliest cells; bacteria and archaea combined to form <a href=\"https:\/\/pubmed.ncbi.nlm.nih.gov\/11541392\/\">eukaryotic<\/a> cells; these complex cells combined to form multicellular organisms; and so on. Each such <a href=\"https:\/\/pubmed.ncbi.nlm.nih.gov\/7885442\/\">Major Evolutionary Transition<\/a> has involved a functional symbiosis, a form of interdependency in which previously independent entities joined forces to make a greater whole.<\/p><p>The first rungs of this evolutionary ladder did not involve living entities with heritable genetic codes. However, once the entities joining forces were alive \u2014 and therefore computational \u2014 every subsequent combination increased the potential computing power of the symbiotic whole. 
Human-level intelligence, many rungs above those earliest life forms, arises from the combined computation of some 86 billion neurons, all processing in parallel.<\/p><h2 class=\"wp-block-heading\" id=\"h-neural-computing\"><strong>Neural Computing<\/strong><\/h2><p>The pioneers of computing were well aware of the computational nature of our brains. In fact, in the 1940s, there was little difference between the nascent fields of computer science and neuroscience. Electronic computers were developed to carry out mental operations on an industrial scale, just as factory machines were developed in the previous century to automate physical labor. Originally, repetitive mental tasks were carried out by <a href=\"https:\/\/www.degruyter.com\/document\/doi\/10.1515\/9781400849369\/html\"><em>human<\/em> computers<\/a> \u2014 like the \u201c<a href=\"https:\/\/www.harpercollins.com\/products\/hidden-figures-margot-lee-shetterly\">hidden figures<\/a>,\u201d women who (often with little acknowledgment and low pay) undertook the lengthy calculations needed for the war effort and later the space race.<\/p><p>Accordingly, the logic gates that make up electronic circuits at the heart of the new \u201cartificial\u201d computers were originally conceived of as <a href=\"https:\/\/archive.computerhistory.org\/resources\/text\/Knuth_Don_X4100\/PDF_index\/k-8-pdf\/k-8-u2593-Draft-EDVAC.pdf\">artificial neurons<\/a>. Journalists who referred to computers as \u201c<a href=\"https:\/\/www.npl.co.uk\/getattachment\/about-us\/History\/Famous-faces\/Alan-Turing\/Newspaper-reports-from-1950-about-the-ACE-computer.pdf?lang=en-GB\">electronic brains<\/a>\u201d weren\u2019t just writing the midcentury equivalent of clickbait. They were portraying the ambitions of computer science pioneers. And it was natural enough for those first computer scientists to seek to reproduce <em>any<\/em> kind of thinking.<\/p><p>Those hopes were soon dashed. 
On one hand, digital computers were a smashing success at the narrowly procedural tasks we knew how to specify. Electronic computers could be programmed to do the work of human computers cheaply, flawlessly and at a massive scale, from calculating rocket trajectories to tracking payroll. On the other hand, by the 1950s, neuroscientists had discovered that real neurons are a good deal more complicated than logic gates.<\/p><p>Worse, it proved impossible to write programs that could perform even the simplest everyday human functions, from visual recognition to basic language comprehension \u2014 let alone nuanced reasoning, literary analysis or artistic creativity. We had (and still have) no idea how to write down exact procedures for such things. The doomed attempt to do so is now known as \u201cGood Old-Fashioned AI\u201d or GOFAI. We set out to make <a href=\"https:\/\/www.youtube.com\/watch?v=Wy4EfdnMZ5g\">HAL 9000<\/a>, and instead, we got \u201cPress 1 to make an appointment; press 2 to modify an existing appointment.\u201d<\/p><p>A purportedly sensible narrative emerged to justify GOFAI\u2019s failure: computers are not brains, and <a href=\"https:\/\/www.frontiersin.org\/journals\/computer-science\/articles\/10.3389\/fcomp.2022.810358\/full\">brains are not computers<\/a>. Any contrary suggestion was na\u00efve, \u201chype\u201d or, at best, an ill-fitting metaphor. There was, perhaps, something reassuring about the idea that human behavior couldn\u2019t be programmed. For the most part, neuroscience and computer science went their separate ways.<\/p><p>\u201cComputational neuroscientists,\u201d however, continued to study the brain as an <a href=\"https:\/\/work.caltech.edu\/neurips.html\">information-processing system<\/a>, albeit one based on a radically different design from those of conventional electronic computers. 
The brain has no central processing unit or separate memory store, doesn\u2019t run instructions only sequentially and doesn\u2019t use binary logic. Still, as Turing showed, computing is universal. Given enough time and memory, any computer \u2014 whether biological or technological \u2014 can simulate any other computer. Indeed, over the years, neuroscientists have built increasingly accurate computational models of biological neurons and neural networks. Such models can include not only the all-or-none pulses or \u201caction potentials\u201d that most obviously characterize neural activity but also the effects of <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S2589004224025239\">chemical signals<\/a>, <a href=\"https:\/\/www.nature.com\/articles\/s41592-021-01252-x\">gene expression<\/a>, <a href=\"https:\/\/www.nature.com\/articles\/s41598-022-13925-4\">electric fields<\/a> and many other phenomena.<\/p><p>It\u2019s worth pausing here to unpack the word \u201cmodel.\u201d In its traditional usage, as in a model 
railroad or a financial model, the model is emphatically <em>not<\/em> the real thing. It\u2019s a map, not the actual territory. When neuroscientists build model neural networks, it\u2019s generally in this spirit. They are trying to learn how brains work, not how to make computers think. Accordingly, their models are drastically simplified.<\/p><p>However, computational neuroscience reminds us that the brain, too, is busy computing. And, as such, the function computed by the brain is itself a model. So, the territory <em>is<\/em> a map; that is, if the map were as big as the territory, it <em>would<\/em> be the real thing, just as a model railroad would be if it were full-sized. If we built a fully realized model brain, in other words, it would be capable of modeling us right back!<\/p><p>Even as GOFAI <a href=\"https:\/\/www.noemamag.com\/what-ai-can-tell-us-about-intelligence\/\">underwent<\/a> a repeated boom-and-bust cycle, an alternative \u201cconnectionist\u201d school of thought about how to get computers to think persisted, often intersecting with computational neuroscience. Instead of symbolic logic based on rules specified by a programmer, connectionists embraced \u201cmachine learning,\u201d whereby neural nets could learn from experience \u2014 as we largely do.<\/p><p>Although often overshadowed by GOFAI, the connectionists never stopped trying to make artificial neural nets perform real-life cognitive tasks. Among these <a href=\"https:\/\/arstechnica.com\/ai\/2024\/11\/how-a-stubborn-computer-scientist-accidentally-launched-the-deep-learning-boom\/\">stubborn holdouts<\/a> were Geoffrey Hinton and John Hopfield, who won the Nobel Prize in physics last year for their work on machine learning; many other pioneers in the field, such as American psychologists Frank Rosenblatt and James McClelland and Japanese computer scientist Kunihiko Fukushima, have been less widely recognized. 
Unfortunately, the 20th-century computing paradigm was (at least until the 1990s) <a href=\"https:\/\/www.amacad.org\/sites\/default\/files\/publication\/downloads\/Daedalus_Sp22_01_Manyika.pdf\">unfriendly to machine learning<\/a>, not only due to widespread skepticism about neural nets but also because programming was inherently symbolic. Computers were made for running instructions sequentially \u2014 a poor fit for neural computing. Originally, this was a design choice.<\/p><p>The first logic gates were created using vacuum tubes, which were unreliable and needed frequent replacement. To make computation as robust as possible, it was natural to base all calculations on a minimum number of distinguishable \u201cstates\u201d for each tube: \u201coff\u201d or \u201con.\u201d Hence binary, which uses only 0 and 1 \u2014 and also happens to be a natural basis for Boolean logic, whose elementary symbols are \u201cTrue\u201d (or 1) and \u201cFalse\u201d (or 0).<\/p><p>It was also natural to build a \u201cCentral Processing Unit\u201d (CPU) using a minimal number of failure-prone tubes, which would then be used to execute one instruction after another. This meant separating processing from memory and using a cable or \u201cbus\u201d to sequentially shuttle data and instructions from the memory to the CPU and back.<\/p><p>This \u201cclassical\u201d computing paradigm flourished for many years thanks to <a href=\"https:\/\/www.chiphistory.org\/20-moore-s-law-original-draft-1965\">Moore\u2019s Law<\/a> \u2014 a famous 1965 observation by Gordon Moore, a future founder of chip maker Intel, that miniaturization was doubling the number of transistors on a chip every year or two. As transistors shrank, they became exponentially faster and cheaper, and consumed less power. So, giant, expensive mainframes became minis, then desktops, then laptops, then phones, then wearables. 
Computers now exist that are tiny enough to fit through a <a href=\"https:\/\/www.science.org\/doi\/10.1126\/sciadv.abf6312\">hypodermic needle<\/a>. Laptops and phones consist mainly of batteries and screens; the actual computer in such a device \u2014 its \u201csystem on chip,\u201d or SoC \u2014 is only about a square centimeter in area, and a tenth of a millimeter thick. A single drop of water occupies several times that volume.<\/p><p>While this scale progression is remarkable, it doesn\u2019t lead brainward. Your brain is neither tiny nor fast; it runs much more sedately than the computer in a smartwatch. However, recall that it contains 86 billion or so neurons working at the same time. This adds up to a truly vast amount of computation, and because it happens comparatively slowly and uses information stored locally, it is energy efficient. Artificial neural computing remained inefficient, even as computers sped up, because it was still implemented as sequential instructions: reading and writing data from a separate memory as needed.<\/p><p>It only became possible to run meaningfully sized neural networks when companies like Nvidia began to design chips with multiple processors running in parallel. Parallelization was partly a response to the petering-out of Moore\u2019s Law in its original form. 
While transistors continued to shrink, after 2006 or so, they could no longer be made to run faster; the <a href=\"https:\/\/www.cs.utexas.edu\/~lin\/cs380p\/Free_Lunch.pdf\">practical limit<\/a> was a few billion cycles per second.<\/p><p>Parallelizing meant altering the programming model to favor short code fragments (originally called \u201cpixel shaders\u201d since they were designed for graphics) that could execute on many processors simultaneously. Shaders turned out to be ideal for parallelizing neural nets. Hence, the Graphics Processing Unit (GPU), originally designed for gaming, now powers AI. Google\u2019s Tensor Processing Units (<a href=\"https:\/\/cloud.google.com\/blog\/products\/compute\/trillium-sixth-generation-tpu-is-in-preview\">TPUs<\/a>) are based on similar design principles.<\/p><p>Although GPUs and TPUs are a step in the right direction, AI infrastructure today remains hobbled by its classical legacy. We are still far from having chips with <em>billions<\/em> of processors on them, all working in parallel on locally stored data. 
And AI models are still implemented using sequential instructions. Conventional computer programming, chip architecture and system design are simply not brain-like. We are simulating neural computing on classical computers, which is inefficient \u2014 just as simulating classical computing with brains was, back in the days of human computation.<\/p><p>Over the next few years, though, we expect to see a truly neural computing paradigm emerge. Neural computing may eventually be achieved on photonic, biological, chemical, quantum, or other entirely novel substrates. But even if \u201csilicon brains\u201d are manufactured using familiar chip technologies, their components will be organized differently. Every square centimeter of silicon will contain many millions of information processing nodes, like neurons, all working at once.<\/p><p>These neural chips won\u2019t run programs. Their functionality will be determined not by code (at least not of the sort we have today), but by billions or trillions of numerical parameters stored across the computing area. A neural silicon brain will be capable of being \u201cflashed,\u201d its parameters initialized as desired; but it will also be able to learn from experience, modifying those parameters on the fly. The computation will be decentralized and robust; occasional failures or localized damage won\u2019t matter. It\u2019s no coincidence that this resembles nature\u2019s architecture for building a brain.<\/p><h2 class=\"wp-block-heading\" id=\"h-predictive-intelligence\"><strong>Predictive<\/strong> <strong>Intelligence<\/strong><\/h2><p>For those of us who were involved in the early development of language models, the evident generality of AI based solely on next-word (or \u201cnext-token\u201d) prediction has been paradigm-shifting. 
Even if we bought into the basic premise that brains are computational, most of us believed that true AI would require discovering some special algorithm, and that algorithm would help clear up the longstanding mysteries of intelligence and consciousness. So, it came as a shock when next-token prediction alone, applied at a massive scale, \u201csolved\u201d intelligence.<\/p><p>Once we got over our shock, we realized that this doesn\u2019t imply that there are no mysteries left, that consciousness is not real, or that the mind is a Wizard of Oz \u201c<a href=\"https:\/\/www.keithfrankish.com\/illusionism-as-a-theory-of-consciousness\/\">illusion<\/a>.\u201d The neural networks behind LLMs are both enormous and provably capable of <a href=\"https:\/\/proceedings.mlr.press\/v202\/giannou23a.html\">any<\/a> computation, just like a classical computer running a program. In fact, LLMs can learn a <a href=\"https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2023\/hash\/b2e63e36c57e153b9015fece2352a9f9-Abstract-Conference.html\">wider<\/a> variety of algorithms than computer scientists have discovered or invented.<\/p><p>Perhaps, then, the shock was unwarranted. We already knew that the brain is computational and that whatever it does must be learnable, either by evolution or by experience \u2014 or else we would not exist. We have simply found ourselves in the odd position of reproducing something before fully understanding it. When Turing and von Neumann made their contributions to computer science, theory was ahead of practice. 
Today, practice is <a href=\"https:\/\/berggruen.org\/themes\/antikythera\">ahead of theory<\/a>.<\/p><p>Being able to create intelligence in the lab gives us powerful new avenues for investigating its longstanding mysteries, because \u2014 despite <a href=\"https:\/\/dl.acm.org\/doi\/full\/10.1145\/3639372\">claims<\/a> to the contrary \u2014 artificial neural nets are not \u201cblack boxes.\u201d We can not only examine their chains of thought but are also learning to probe them more deeply to conduct \u201c<a href=\"https:\/\/transformer-circuits.pub\/2025\/attribution-graphs\/biology.html\">artificial neuroscience<\/a>.\u201d And unlike biological brains, we can record and analyze every detail of their activity, run perfectly repeatable experiments at large scale, and turn on or off <a href=\"https:\/\/arxiv.org\/abs\/1901.08644\">any part<\/a> of the network to see what it does.<\/p><p>While there are many important differences between AI models and brains, <a href=\"https:\/\/www.cell.com\/fulltext\/S0896-6273(17)30509-3\">comparative<\/a> <a href=\"https:\/\/www.biorxiv.org\/content\/10.1101\/407007v2.abstract\">analyses<\/a> have found striking functional <a href=\"https:\/\/direct.mit.edu\/nol\/article\/5\/1\/43\/119156\">similarities<\/a> between them too, suggesting common underlying principles. After drawing inspiration from decades of brain research, AI is thus starting to pay back its debt to neuroscience, under the banner of \u201c<a href=\"https:\/\/www.nature.com\/articles\/s41467-023-37180-x\">NeuroAI<\/a>.\u201d<\/p><p>Although we don\u2019t yet fully understand the algorithms LLMs learn, we\u2019re starting to grasp <em>why<\/em> learning to predict the next token works so well. 
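<\/p><p>The core move can be made concrete with a deliberately tiny sketch \u2014 a bigram lookup table rather than a neural network, so every detail here is illustrative, not how production LLMs are built. Even this crude stand-in shows the shape of the idea: compile statistics about what follows what, then predict.<\/p>

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each token, which tokens tend to follow it."""
    tokens = text.split()
    counts = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most frequently observed successor of `token`."""
    if not counts[token]:
        return None
    return counts[token].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # prints "cat": it followed "the" twice, "mat" once
```

<p>An LLM replaces the lookup table with billions of learned parameters and conditions on an entire context rather than a single preceding token, but the training objective is the same: predict what comes next.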
The \u201cpredictive brain hypothesis\u201d has a long <a href=\"https:\/\/www.jstor.org\/stable\/184878\">history<\/a> in neuroscience; it holds that brains evolved to continually model and predict the future \u2014 of the perceptual environment, of oneself, of one\u2019s actions, and of their effects on oneself and the environment. Our ability to behave intentionally and intelligently depends on such a model.<\/p><!-- Quote Block Template -->\n\n<figure class=\"quote\">\n\n  <blockquote class=\"quote__container\">\n\n    <div class=\"quote__text\">\n      &#8220;We are simulating neural computing on classical computers, which is inefficient \u2014 just as simulating classical computing with brains was, back in the days of human computation.&#8221;    <\/div>\n\n    \n    <div class=\"quote__social-media\">\n      <div\n        class=\"a2a_kit a2a_kit_size_35 a2a_default_style\"\n        data-a2a-url=\"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/wpm-article\/81753\"\n        data-a2a-title='\"We are simulating neural computing on classical computers, which is inefficient \u2014 just as simulating classical computing with brains was, back in the days of human computation.\"'\n      >\n        <a class=\"a2a_button_facebook\"><\/a>\n        <a class=\"a2a_button_twitter\"><\/a>\n        <a class=\"a2a_button_email\"><\/a>\n      <\/div>\n    <\/div>\n  <\/blockquote>\n<\/figure><p>Consider reaching for a cup of water. It\u2019s no mean feat to have learned how to model the world and your own body well enough to bring your hand into contact with that cup, wrap your fingers around it, and bring it to your lips and drink \u2014 all in a second or two. At every stage of these movements, your nervous system computes a <a href=\"https:\/\/www.penguinrandomhouse.com\/books\/566315\/being-you-by-anil-seth\/\">prediction<\/a> and compares it with proprioceptive feedback. 
Your eyes flit across the scene, providing further error correction.<\/p><p>At a higher level, you predict that drinking will quench your thirst. Thirst is itself a predictive signal, though \u201clearned\u201d by an entire species on much longer, evolutionary timescales. Organisms incapable of predicting their need for water won\u2019t survive long enough to pass on their faulty self-models.<\/p><p>Evolution distills countless prior generations of experience, boiled down to the crude signal of reproductive success or death. Evolutionary learning is at work when a newborn recognizes faces, or, perhaps when a cat that has never seen a snake jumps in fright upon noticing a <a href=\"https:\/\/www.youtube.com\/watch?v=agi4geKb8v8\">cucumber<\/a> placed surreptitiously behind it.<\/p><p>Machine learning involves tuning model parameters that are usually understood to represent synapses \u2014 the connections between neurons that strengthen or weaken through lifelong learning. These parameters are usually initialized randomly. But in brains, neurons wire up according to a genetically encoded (and environmentally sensitive) developmental program. We expect future AI models will similarly be evolved to construct themselves. They will grow and develop dynamically through experience rather than having static, hand-engineered architectures with fixed parameter counts.<\/p><p>Unifying learning across timescales may also eliminate the current dichotomy between model training and normal operation (or \u201cinference\u201d). 
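<\/p><p>One way to picture the alternative is a predictor with no separate training phase at all. The sketch below is a toy of our own devising (a frequency table standing in for real model parameters): each incoming token is first predicted from what came before, then immediately used to update the parameters, so \u201ctraining\u201d and \u201cinference\u201d are a single loop.<\/p>

```python
from collections import Counter, defaultdict

class OnlinePredictor:
    """Toy model whose parameters update on every observation,
    so there is no separate training phase."""

    def __init__(self):
        self.counts = defaultdict(Counter)  # the "parameters"
        self.prev = None

    def observe(self, token):
        """Predict `token` from the previous one, then learn from it."""
        prediction = None
        if self.prev is not None:
            if self.counts[self.prev]:
                prediction = self.counts[self.prev].most_common(1)[0][0]
            self.counts[self.prev][token] += 1  # learn on the fly
        self.prev = token
        return prediction

p = OnlinePredictor()
guesses = [p.observe(t) for t in "a b a b a b".split()]
print(guesses)  # [None, None, None, 'b', 'a', 'b'] -- it improves mid-stream
```

<p>Frozen-weight models approximate this only within a context window; a system built along these lines would keep whatever it learns.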
Today, state-of-the-art training of LLMs is extremely expensive, requiring massive computational resources over months, while inference is comparatively cheap and can be done in real-time. Yet we know that one of the most important skills LLMs learn is <a href=\"https:\/\/proceedings.mlr.press\/v202\/von-oswald23a.html\"><em>how<\/em> to learn<\/a>, which explains why it\u2019s possible for them to handle a novel idea, word or task during a chat session.<\/p><p>For now, though, any such newly acquired knowledge is transient, persisting only as long as it remains within the \u201ccontext window\u201d; the model parameters remain unchanged. Future models that unify action and prediction should be able to learn cumulatively and open-endedly as they go, the way we do.<\/p><p>In a similar vein, we\u2019re starting to see a shift from conceiving of AI model capability as capped by its initial offline training to \u201c<a href=\"https:\/\/arxiv.org\/abs\/2408.03314\">test-time scaling<\/a>,\u201d in which models become more capable simply by taking more time to think through their responses. More brain-like model designs should allow such in-the-moment improvements to accumulate, as they do for us, so that <em>all<\/em> future responses can benefit.<\/p><p>Because the neural networks underlying LLMs are powerful general-purpose predictors, it makes sense that they have proven capable not only of modeling language, sound and video, but also of revolutionizing robotics, like in the earlier example of reaching for a glass of water. Hand-programmed GOFAI struggled for decades with anything beyond the repetitive, routinized robotics of assembly lines. 
But today, LLM-like \u201c<a href=\"https:\/\/arxiv.org\/abs\/2406.09246\">vision-language-action<\/a>\u201d models can learn how to drive all sorts of robotic bodies, from Waymo vehicles to humanoid (and many other) forms, which are increasingly deployed in complex, unstructured environments.<\/p><p>By using chains of thought and reasoning traces, which break large problems down into smaller intermediate steps, predictive models can even simulate multiple possible outcomes or contingencies, selecting from a tree of potential futures. This kind of \u201cchoiceful\u201d prediction may be the mechanism underlying our notion of free will.<\/p><p>Ultimately, everything organisms do can be thought of as a self-fulfilling prediction. Life is that which predicts itself into continued existence, and through increasing intelligence, that prediction can become ever more sophisticated.<\/p><p>Embracing the paradigm of predictive processing, including the unification of planning, action and prediction, promises not only to further improve language models and robotics, but to also bring the theoretical foundations of machine learning, neuroscience and even theoretical biology onto a common footing.<\/p><h2 class=\"wp-block-heading\" id=\"h-general-intelligence\"><strong>General<\/strong> <strong>Intelligence<\/strong><\/h2><p>According to some, LLMs are counterfeit intelligence: they give the <em>appearance<\/em> of being intelligent without <em>actually<\/em> being so. 
According to these skeptics, we have trained AI to <a href=\"https:\/\/www.nature.com\/articles\/d41586-023-02361-7\">pass<\/a> the <a href=\"https:\/\/academic.oup.com\/mind\/article-abstract\/LIX\/236\/433\/986238\">Turing Test<\/a> by \u201cautocompleting\u201d enormous numbers of sentences, creating machines that fool us into believing there\u2019s \u201csomeone home\u201d when there is not.<\/p><p>Many hold the opposing view that AI is real and that we\u2019re on the threshold of achieving \u201cArtificial General Intelligence\u201d (AGI) \u2014 though there are wide-ranging views on how to define it. Depending on the individual, this prospect may be exciting, alarming or even existentially threatening.<\/p><!-- Quote Block Template -->\n\n<figure class=\"quote\">\n\n  <blockquote class=\"quote__container\">\n\n    <div class=\"quote__text\">\n      &#8220;Despite claims to the contrary \u2014 artificial neural nets are not &#8216;black boxes.'&#8221;    <\/div>\n\n    \n    <div class=\"quote__social-media\">\n      <div\n        class=\"a2a_kit a2a_kit_size_35 a2a_default_style\"\n        data-a2a-url=\"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/wpm-article\/81753\"\n        data-a2a-title='\"Despite claims to the contrary \u2014 artificial neural nets are not 'black boxes.'\"'\n      >\n        <a class=\"a2a_button_facebook\"><\/a>\n        <a class=\"a2a_button_twitter\"><\/a>\n        <a class=\"a2a_button_email\"><\/a>\n      <\/div>\n    <\/div>\n  <\/blockquote>\n<\/figure><p>So, which camp is right? The answer might be \u201cneither\u201d: most in both camps hold that AGI is a discrete threshold that will (or won\u2019t) be crossed sometime in the future. In reality, there does not appear to be any such threshold \u2014 or if there is, we may have <a href=\"https:\/\/www.noemamag.com\/artificial-general-intelligence-is-already-here\/\">already crossed it<\/a>.<\/p><p>Let\u2019s address the skeptics first. 
For many, AI\u2019s ability to perform tasks \u2014 whether chatting, writing poetry, driving cars or even doing something entirely novel \u2014 is irrelevant because the way AI is implemented disqualifies it from being truly intelligent. This view may be justified by asserting that the brain must do something other than \u201cmere\u201d prediction, that the brain is not a computer, or simply that AI models are not alive. Consequently, skeptics often hold that, when applied to AI, terms like \u201cintelligence,\u201d \u201cunderstanding,\u201d \u201cagency,\u201d \u201clearning,\u201d or \u201challucination\u201d require scare quotes because they are inappropriately anthropomorphic.<\/p><p>Is such handwringing over diction warranted? Adopting a functional perspective suggests otherwise. We call both a bird\u2019s wing and a plane\u2019s wing \u201cwings\u201d not because they are made of the same material or work the same way, but because they serve the same function. Should we care whether a plane achieves flight differently than a bird? Not if our concern is with <em>purpose<\/em> \u2014 that is, with why birds and planes have wings in the first place.<\/p><p>Functionalism is a hallmark of all \u201cpurposeful\u201d systems, including organisms, ecologies and technologies. Everything \u201cpurposeful\u201d is made up of mutually interdependent parts, each serving purposes (or functions) for the others. And those parts, too, are often themselves made out of smaller interdependent and purposeful parts.<\/p><p>Whether implicitly or explicitly, many AI skeptics care less about <em>what<\/em> is achieved (flying or intelligence) than about <em>how<\/em> it is achieved. Nature, however, is indifferent to \u201chow.\u201d For the sake of flexibility or robustness, engineered and natural systems alike often involve the substitution or concurrent use of parts that serve the same function but work differently. 
For instance, in logistics, railroads and trucks both transport goods; as a customer, you only care about getting your delivery. In your cells, aerobic or anaerobic respiration may serve the same function, with the anaerobic pathway kicking in when you exercise too hard for aerobic respiration to keep up.<\/p><p>The nervous system is no different. It, too, consists of parts with functional relationships, and these, too, can be swapped out for functional equivalents. We already do this, to a degree, with cochlear implants and artificial retinas, though these prostheses can\u2019t yet approach the quality of biological ears or eyes. Eventually, though, neuroprosthetics will rival or exceed the sensory organs we\u2019re born with.<\/p><p>One day, we may even be able to replace damaged brain tissue in the same way. This will work because you have no \u201chomunculus,\u201d no particularly irreplaceable spot in your brain where the \u201cyou\u201d part of you lives. What makes you <em>you<\/em> is not any one part of your brain or body, or your atoms \u2014 they turn over frequently in any case \u2014 nor is it the details of how every part of you is implemented. You are, rather, a highly complex, dynamic set of functional <em>relationships<\/em>.<\/p><p>What about AI models? Not only are LLMs implemented very differently from brains, but their relationships with us are also different from those between people. They don\u2019t have bodies or life stories, kinship or long-term attachments. Such differences are relevant in considering the ethical and legal status of AI. They\u2019re irrelevant, however, to questions of capability, like those about intelligence and <a href=\"https:\/\/spectrum.ieee.org\/theory-of-mind-ai\">understanding<\/a>.<\/p><p>Some researchers agree with all these premises in theory but still maintain that there is a threshold to AGI and current AI systems have not crossed it yet. So how will we know when they do? 
The answer must involve benchmarks to test the capabilities we believe constitute general intelligence.<\/p><p>Many have been proposed. Some, like AI researcher Fran\u00e7ois Chollet\u2019s \u201c<a href=\"https:\/\/arcprize.org\/arc-agi\">Abstraction and Reasoning Corpus<\/a>,\u201d are IQ-like tests. Others are more holistic; our colleagues at Google DeepMind, for example, have <a href=\"https:\/\/arxiv.org\/abs\/2311.02462\">emphasized<\/a> the need to focus on capabilities rather than processes, stressing the need for a generally intelligent agent to be competent at a \u201cwide range of non-physical tasks, including metacognitive tasks like learning new skills.\u201d But which tasks should one assess? Outside certain well-defined skills within competitive markets, we may find it difficult to meaningfully bucket ourselves into \u201ccompetent\u201d (50th percentile), \u201cexpert\u201d (90th percentile) and \u201cvirtuoso\u201d (99th percentile).<\/p><!-- Quote Block Template -->\n\n<figure class=\"quote\">\n\n  <blockquote class=\"quote__container\">\n\n    <div class=\"quote__text\">\n      &#8220;For the sake of flexibility or robustness, engineered and natural systems alike often involve the substitution or concurrent use of parts that serve the same function but work differently.&#8221;    <\/div>\n\n    \n    <div class=\"quote__social-media\">\n      <div\n        class=\"a2a_kit a2a_kit_size_35 a2a_default_style\"\n        data-a2a-url=\"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/wpm-article\/81753\"\n        data-a2a-title='\"For the sake of flexibility or robustness, engineered and natural systems alike often involve the substitution or concurrent use of parts that serve the same function but work differently.\"'\n      >\n        <a class=\"a2a_button_facebook\"><\/a>\n        <a class=\"a2a_button_twitter\"><\/a>\n        <a class=\"a2a_button_email\"><\/a>\n      <\/div>\n    <\/div>\n  <\/blockquote>\n<\/figure><p>The original definition of AGI 
dates to <a href=\"https:\/\/arxiv.org\/abs\/2308.03598\">at least 2002<\/a>, and can be described most simply as \u201cgeneral cognitive capabilities typical for humans,\u201d as computer scientists Peter Voss and Mla\u0111an Jovanovi\u0107 put it in a 2023 paper. But some frame these capabilities only in economic terms. OpenAI\u2019s <a href=\"https:\/\/openai.com\/our-structure\/\">website<\/a> defines AGI as \u201ca highly autonomous system that outperforms humans at most economically valuable work.\u201d In 2023, AI entrepreneur Mustafa Suleyman (now CEO of Microsoft AI) <a href=\"https:\/\/www.technologyreview.com\/2023\/07\/14\/1076296\/mustafa-suleyman-my-new-turing-test-would-see-if-ai-can-make-1-million\/\">suggested<\/a> that an AI will be generally \u201ccapable\u201d when it can make a million dollars.<\/p><p>Such thresholds are both arbitrary and inconsistent with the way we think about human intelligence. Why insist on economic activity at all? How much money do we need to make to count as smart, and are those of us who have not managed to amass a fortune <em>not<\/em> generally intelligent?<\/p><p>Of course, we\u2019re motivated to build AI by the prospect of enriching or expanding humanity, whether scientifically, economically or socially. But economic measures of productivity are neither straightforward nor do they map cleanly to intelligence. They also exclude a great deal of human labor whose value is <a href=\"https:\/\/go.gale.com\/ps\/i.do?id=GALE%7CA8257991&amp;sid=googleScholar&amp;v=2.1&amp;it=r&amp;linkaccess=abs&amp;issn=00270520&amp;p=AONE&amp;sw=w&amp;userGroupName=anon%7E3809e2bb&amp;aty=open-web-entry\">not accounted for economically<\/a>. 
Focusing on the \u201cecological validity\u201d of tasks \u2014 that is, on whether they matter to others, whether economically, artistically, socially, emotionally or in any other way \u2014 emphasizes the difficulty of any purely objective performance evaluation.<\/p><p>Today\u2019s LLMs can already perform a wide and growing array of cognitive tasks that, a few years ago, any reasonable person would have agreed require high intelligence: from breaking down a complex argument to writing code to softening the tone of an email to researching a topic online. In nearly any given domain, a human expert can still do better. (This is the performance gap many <a href=\"https:\/\/paperswithcode.com\/dataset\/mmlu\">current<\/a> evaluation methodologies try to measure.) But let\u2019s acknowledge that no single human \u2014 no matter how intelligent \u2014 possesses a comparable breadth of skills. In the past few years, we have quietly switched from measuring AI performance relative to <em>anyone<\/em> to assessing it relative to <em>everyone<\/em>. Put another way, individual humans are now <em>less<\/em> \u201cgeneral\u201d than AI models.<\/p><p>This progress has been swift but continuous. We think the goalposts keep moving in part because no single advance seems decisive enough to warrant declaring AGI success. There\u2019s always more to do. Yet we believe that if an AI researcher in 2002 could somehow interact with any of today\u2019s LLMs, that researcher would, without hesitation, say that AGI is here.<\/p><p>One key to achieving the \u201cgeneral\u201d in AGI has been \u201cunsupervised training,\u201d which involves machine learning without stipulating a task. Fine-tuning and reinforcement learning are usually applied afterward to enhance particular skills and behavioral attributes, but most of today\u2019s model training is generic. AI\u2019s broad capabilities arise by learning to model language, sound, vision or anything else. 
Once a model can work with such modalities generically, then, like us, it can be instructed to perform any task \u2014 even an entirely novel one \u2014 as long as that task is first described, inferred or shown by example.<\/p><p>To understand how we\u2019ve achieved artificial general intelligence, why it has only happened recently, after decades of failed attempts, and what this tells us about our own minds, we must re-examine our most fundamental assumptions \u2014 not just about AI, but about the nature of computing itself.<\/p><h2 class=\"wp-block-heading\" id=\"h-collective-intelligence\"><strong>Collective Intelligence<\/strong><\/h2><p>The \u201c<a href=\"https:\/\/web-archive.southampton.ac.uk\/cogprints.org\/2694\/1\/SocialFunctionTxt.pdf?ncid=txtlnkusaolp00000618\">social intelligence hypothesis<\/a>\u201d holds that intelligence explosions in brainy species like ours arose due to a social feedback loop. Our survival and reproductive success depend on our ability to make friends, attract partners, access shared resources and, not least, convince others to help <a href=\"https:\/\/www.hup.harvard.edu\/books\/9780674060326\">care for our children<\/a>. All of these require \u201ctheory of mind,\u201d the ability to put oneself in another\u2019s shoes: What does the other person see and feel? What are they thinking? What do they know, and what <em>don\u2019t<\/em> they know? How will they behave?<\/p><p>Keeping track of the mental states of others is a cognitive challenge. Across primate species, researchers have observed <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/abs\/pii\/004724849290081J\">correlations<\/a> between brain size and troop size. Among humans, the volume of brain areas associated with theory of mind <a href=\"https:\/\/royalsocietypublishing.org\/doi\/abs\/10.1098\/rspb.2011.2574\">correlates<\/a> with the number of friends a person has. 
We also know that people with more friends tend to be healthier and live longer than those who are socially <a href=\"https:\/\/journals.sagepub.com\/doi\/full\/10.1177\/1745691614568352\">isolated<\/a>. Taken together, these observations are evidence of ongoing selection pressure favoring a <a href=\"https:\/\/www.littlebrown.co.uk\/titles\/robin-dunbar\/friends\/9780349143576\/\">social brain<\/a>.<\/p><!-- Quote Block Template -->\n\n<figure class=\"quote\">\n\n  <blockquote class=\"quote__container\">\n\n    <div class=\"quote__text\">\n      &#8220;We have quietly switched from measuring AI performance relative to anyone to assessing it relative to everyone. Put another way, individual humans are now less &#8216;general&#8217; than AI models.&#8221;    <\/div>\n\n    \n    <div class=\"quote__social-media\">\n      <div\n        class=\"a2a_kit a2a_kit_size_35 a2a_default_style\"\n        data-a2a-url=\"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/wpm-article\/81753\"\n        data-a2a-title='\"We have quietly switched from measuring AI performance relative to anyone to assessing it relative to everyone. Put another way, individual humans are now less 'general' than AI models.\"'\n      >\n        <a class=\"a2a_button_facebook\"><\/a>\n        <a class=\"a2a_button_twitter\"><\/a>\n        <a class=\"a2a_button_email\"><\/a>\n      <\/div>\n    <\/div>\n  <\/blockquote>\n<\/figure><p>While theory of mind has a <a href=\"https:\/\/www.press.jhu.edu\/books\/title\/9383\/chimpanzee-politics\">Machiavellian<\/a> side, it\u2019s also essential for the advanced forms of cooperation that make humans special. Teaching and learning, division of labor, the maintenance of reputation and the mental accounting of \u201cIOUs\u201d all rely on theory of mind. Hence, so does the development of any nontrivial economy, political system or technology. 
Since tribes or communities that can cooperate at scale function as larger, more capable wholes, theory of mind doesn\u2019t only deliver individual benefits; it also benefits the group.<\/p><p>As this group-level benefit becomes decisive, the social aggregation of minds tips into a Major Evolutionary Transition \u2014 a symbiosis, if you recall, in which previously independent entities join forces to make <a href=\"https:\/\/www.hachettebookgroup.com\/titles\/mark-w-moffett\/the-human-swarm\/9781549195082\/\">something new<\/a> and greater. The price of aggregation is that formerly independent entities can no longer survive and reproduce on their own. That\u2019s a fair description of modern urbanized society: How many of us could survive in the woods on our own?<\/p><p>We are a superorganism. As such, our intelligence is already collective and, therefore, in a sense, superhuman. That\u2019s why, when we train LLMs on the collective output of large numbers of people, we are already creating a superintelligence with far greater breadth and average depth than any single person \u2014 even though LLMs still usually fall short of individual human experts within their domains of expertise.<\/p><p>This is what motivates <a href=\"https:\/\/arxiv.org\/abs\/2501.14249\">Humanity\u2019s Last Exam<\/a>, a (rather grimly named) recent attempt to create an AI benchmark that LLMs can\u2019t yet ace. The test questions were written by nearly 1,000 experts in more than 100 fields, requiring such skills as translating Palmyrene script from a Roman tombstone or knowing how many paired tendons are supported by a hummingbird\u2019s sesamoid bone. An expert classicist could answer the former, and an expert ornithologist could answer the latter, but we suspect that median human performance on the exam would be close to zero. 
By contrast, state-of-the-art models today score between<a href=\"https:\/\/www.techradar.com\/computing\/artificial-intelligence\/could-you-pass-humanitys-last-exam-probably-not-but-neither-can-ai\"> 3.3% <\/a>and <a href=\"https:\/\/blog.google\/technology\/google-deepmind\/gemini-model-thinking-updates-march-2025\/\">18.8%<\/a>.<\/p><p>Humanity is superintelligent thanks to its cognitive division of labor; in a sense, that is true of an individual brain, too. AI pioneer Marvin Minsky described a \u201c<a href=\"https:\/\/www.jstor.org\/stable\/20708493\">Society of Mind<\/a>,\u201d postulating that our apparently singular \u201cselves\u201d are really hive minds consisting of many specialized interacting agents. Indeed, our cerebral cortex consists of an array of \u201ccortical columns,\u201d repeating units of neural circuitry tiled many times to form an extended surface. Although the human cortex is only about 2 to 4.5 millimeters thick, its area can be as large as 2,500 square centimeters (the brain\u2019s wrinkled appearance is a consequence of cramming the equivalent of a large dinner napkin into our skulls). Our cortex was able to expand quickly when evolutionary pressures demanded it precisely because of its modular design. In effect, we simply added more cortical columns.<\/p><p>Cortical modularity is not just developmental but functional. Some parts of the cortex specialize in visual processing, others in auditory processing, touch and so on; still others appear to specialize in social modeling, writing and numeracy. Since these tasks are so diverse, one might assume each corresponding region of the brain is as specialized and different from the other as a dishwasher compared to a photocopier.<\/p><p>But the cortex is different: <a href=\"https:\/\/pubmed.ncbi.nlm.nih.gov\/2299388\/\">areas<\/a> <em>start learning<\/em> their tasks, beginning in infancy. 
We know that this ability to learn is powerful and general, given the existence of cortical areas such as the \u201cvisual word form area,\u201d which specializes in reading \u2014 a skill that emerged <a href=\"https:\/\/www.hachettebookgroup.com\/titles\/morten-h-christiansen\/the-language-game\/9781541674981\/\">far too recently<\/a> in human history to have evolved through natural selection. Our cortex did not <em>evolve<\/em> to read, but it can <em>learn<\/em> to. Each cortical area, having implemented the same general \u201clearning algorithm,\u201d is best thought of not as an appliance with a predetermined function but as a human expert who has learned a particular domain.<\/p><p>This \u201csocial cortex\u201d perspective emphasizes the lack of a homunculus or CPU in your brain where \u201cyou\u201d reside; the brain is more like a community. <span style=\"box-sizing: border-box; margin: 0px; padding: 0px;\">Its ability to function coherently without central coordination thus depends not only on the ability of each region to perform its specialized task but also on the ability of these regions to model&nbsp;<em>each other&nbsp;<\/em>\u2014 just as people need theory of mind to form relationships and larger social units.<\/span><\/p><p>Do brain regions themselves function as communities of even smaller parts? We believe so. Cortical circuits are built of neurons that not only perform specialized tasks but also appear to <a href=\"https:\/\/www.nature.com\/articles\/s41467-023-40651-w\">learn to model<\/a> neighboring neurons. 
This mirrors the familiar quip, \u201cturtles all the way down\u201d (a nod to the idea of infinite regress), suggesting that intelligence is best understood as a \u201csocial fractal\u201d rather than a single, monolithic entity.<\/p><!-- Quote Block Template -->\n\n<figure class=\"quote\">\n\n  <blockquote class=\"quote__container\">\n\n    <div class=\"quote__text\">\n      &#8220;Do brain regions themselves function as communities of even smaller parts? We believe so.&#8221;    <\/div>\n\n    \n    <div class=\"quote__social-media\">\n      <div\n        class=\"a2a_kit a2a_kit_size_35 a2a_default_style\"\n        data-a2a-url=\"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/wpm-article\/81753\"\n        data-a2a-title='\"Do brain regions themselves function as communities of even smaller parts? We believe so.\"'\n      >\n        <a class=\"a2a_button_facebook\"><\/a>\n        <a class=\"a2a_button_twitter\"><\/a>\n        <a class=\"a2a_button_email\"><\/a>\n      <\/div>\n    <\/div>\n  <\/blockquote>\n<\/figure><p>It may also be \u201cturtles all the way up.\u201d As brains become bigger, individuals can become smarter; and as individuals become more numerous, societies can become <a href=\"https:\/\/press.princeton.edu\/books\/paperback\/9780691178431\/the-secret-of-our-success\">smarter<\/a>. There is a curious feedback loop between scales here, as we could only have formed larger societies by growing our brains to model others, and our brains themselves appear to have grown larger through an analogous internal division of cognitive labor.<\/p><p>AI models appear to obey the same principle. Researchers have popularized the idea of \u201cscaling laws\u201d relating model size (and amount of training data) with model capability. To a first approximation, bigger models are <a href=\"https:\/\/arxiv.org\/abs\/2206.07682\">smarter<\/a>, just as bigger brains are smarter. And like brains, AI models are also modular. 
In fact, many rely on explicitly training a tightly knit \u201ccollective\u201d of specialized sub-models, known as a \u201c<a href=\"https:\/\/www.jmlr.org\/papers\/v23\/21-0998.html\">Mixture of Experts<\/a>.\u201d Furthermore, even big, monolithic models exhibit \u201c<a href=\"https:\/\/arxiv.org\/abs\/2310.10908\">emergent modularity<\/a>\u201d \u2014 they, too, scale by learning how to partition themselves into specialized modules that can divide and conquer.<\/p><p>Thinking about intelligence in terms of sociality and the division of cognitive labor across many simultaneous scales represents a profound paradigm shift. It encourages us to explore AI architectures that look more like growing social networks rather than static, ever-larger monolithic models. It will also be essential to allow models (and sub-models) to progressively specialize, forming long-running collaborations with humans and with each other.<\/p><p>Any of the 1,000-some experts who contributed to Humanity\u2019s Last Exam knows that you can learn only so much from the internet. Beyond that frontier, learning is inseparable from action and interaction. The knowledge frontier expands when those new learnings are shared \u2014 whether they arise from scientific experimentation, discussion or extended creative thinking offline (which, perhaps, amounts to discussion with oneself).<\/p><p>In today\u2019s approach to frontier AI, existing human output is aggregated and distilled into a single giant \u201cfoundation model\u201d whose weights are subsequently frozen. But AI models are poised to become increasingly autonomous and agentive, including by employing or interacting with other agents. AIs are already helpful in brief, focused interactions. 
But if we want them to aid in the larger project of expanding the frontiers of collective human knowledge and capability, we must enable them to learn and diversify interactively and continually, as we do.<\/p><p>This is sure to alarm some, as it opens the door to AIs evolving their capabilities open-endedly \u2014 again, as we do. The AI safety community refers to a model\u2019s ability to evolve open-endedly as \u201c<a href=\"https:\/\/arxiv.org\/abs\/1906.01820\">mesa optimization<\/a>,\u201d and sees this as a threat. However, we have <a href=\"https:\/\/arxiv.org\/abs\/2309.05858\">discovered<\/a> that even today\u2019s AI models are mesa optimizers because prediction inherently involves learning on the fly; that\u2019s what a chatbot does when instructed to perform a novel task. It works because, even if the chatbot\u2019s neural network weights are frozen, every output makes use of the entire \u201ccontext window\u201d containing the chat transcript so far. Still, current chatbots suffer a kind of amnesia. They are generally unable to retain their learnings beyond the context of a chat session or sessions. Google\u2019s \u201c<a href=\"https:\/\/arxiv.org\/abs\/2404.07143\">Infini-attention<\/a>\u201d and <a href=\"https:\/\/arxiv.org\/abs\/2501.00663\">long-term memory<\/a>, both of which compress older material to allow effectively unbounded context windows, are significant recent advances in this area.<\/p><p>The social view of intelligence offers new perspectives not only on AI engineering, but also on some longstanding problems in philosophy, such as the \u201chard problem\u201d of consciousness. If we understand consciousness to mean our clear sense of ourselves as entities with our own experiences, inner lives and agency, its emergence is no mystery. 
We form <a href=\"https:\/\/global.oup.com\/academic\/product\/consciousness-and-the-social-brain-9780199928644\">models of \u201cselves\u201d<\/a> because we live in a social environment full of \u201cselves,\u201d whose thoughts and feelings we must constantly predict using theory of mind. Of course, we need to understand that <em>we<\/em> are a \u201cself\u201d too, not only because our own past, present and future experiences are highly salient, but because our models of others include <em>their<\/em> models of <em>us<\/em>!<\/p><p>Empirical tests to diagnose deficits in theory of mind have existed for decades. When we run these tests on LLMs, we find, unsurprisingly, that they perform about <a href=\"https:\/\/arxiv.org\/abs\/2405.18870\">as well as humans do<\/a>. After all, \u201cselves\u201d and theory-of-mind tasks feature prominently in the stories, dialogues and comment threads LLMs are trained on. We rely on theory of mind in our chatbots, too. In every chat, the AI must not only model us but also maintain a model of itself as a friendly, helpful assistant, and a model of our model of it \u2014 and so on.&nbsp;<\/p><h2 class=\"wp-block-heading\" id=\"h-beyond-ai-development-as-usual\">Beyond AI Development As Usual<\/h2><p>After decades of meager AI progress, we are now rapidly advancing toward systems capable not just of echoing individual human intelligence, but of extending our collective more-than-human intelligence. 
We are both excited and hopeful about this rapid progress, while acknowledging that it is a moment of momentous paradigm change, attended, as always, by anxiety, debate, upheaval \u2014&nbsp;and many considerations that we must <a href=\"https:\/\/www.digitalistpapers.com\/essays\/getting-ai-right\">get right<\/a>.<\/p><p>At such times, we must prioritize not only technical advances, but knight moves that, as in chess, combine such advances with sideways steps into adjacent fields or paradigms to discover rich new intellectual territory, rethink our assumptions and reimagine our foundations. New paradigms will be needed to develop intelligence that will benefit humanity, advance science, and ultimately help us understand ourselves \u2014 as individuals, as ecologies of smaller intelligences and as constituents of larger wholes.<\/p><p><em>The views expressed in this essay are those of the authors and do not necessarily reflect those of Google or Alphabet.<\/em><\/p>\n        ","protected":false},"excerpt":{"rendered":"","protected":false},"author":3610,"featured_media":81754,"template":"","wpm-article-type":[3],"wpm-article-topic":[21,11,39,23,20],"wpm-article-tag":[],"class_list":["post-81753","wpm-article","type-wpm-article","status-publish","has-post-thumbnail","hentry","wpm-article-type-essay","wpm-article-topic-digital-society","wpm-article-topic-future-of-capitalism","wpm-article-topic-geopolitics-globalization","wpm-article-topic-philosophy-culture","wpm-article-topic-technology-and-the-human"],"acf":[],"apple_news_notices":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.0 (Yoast SEO v25.0) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>AI Is Evolving \u2014 And Changing Our Understanding Of Intelligence<\/title>\n<meta name=\"description\" content=\"Advances in AI are making us reconsider what intelligence is and giving us clues to unlocking AI\u2019s full potential.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.noemamag.com\/ai-is-evolving-and-changing-our-understanding-of-intelligence\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"AI Is Evolving \u2014\u00a0And Changing Our Understanding Of Intelligence\" \/>\n<meta property=\"og:description\" content=\"Advances in AI are making us reconsider what intelligence is and giving us clues to unlocking AI\u2019s full potential.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.noemamag.com\/ai-is-evolving-and-changing-our-understanding-of-intelligence\/\" \/>\n<meta property=\"og:site_name\" content=\"NOEMA\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/NoemaMag\" \/>\n<meta 
property=\"article:modified_time\" content=\"2025-06-16T16:45:51+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/noemamag.imgix.net\/2025\/04\/NoemaFinal_Flat947x1186.jpg?fm=pjpg&ixlib=php-3.3.1&s=d6ef26fb5b8b41fd5e568120a0e65360\" \/>\n\t<meta property=\"og:image:width\" content=\"947\" \/>\n\t<meta property=\"og:image:height\" content=\"1186\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:image\" content=\"https:\/\/noemamag.imgix.net\/2025\/04\/Noema-Twitter-Card-Vertical-Template-2025-04-08T103555.509.png?fm=png&ixlib=php-3.3.1&s=4ccd023a84183f6365159736d69d1ab0\" \/>\n<meta name=\"twitter:site\" content=\"@NoemaMag\" \/>\n<meta name=\"twitter:label1\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"34 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.noemamag.com\/ai-is-evolving-and-changing-our-understanding-of-intelligence\/\",\"url\":\"https:\/\/www.noemamag.com\/ai-is-evolving-and-changing-our-understanding-of-intelligence\/\",\"name\":\"AI Is Evolving \u2014 And Changing Our Understanding Of Intelligence\",\"isPartOf\":{\"@id\":\"https:\/\/www.noemamag.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.noemamag.com\/ai-is-evolving-and-changing-our-understanding-of-intelligence\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.noemamag.com\/ai-is-evolving-and-changing-our-understanding-of-intelligence\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/noemamag.imgix.net\/2025\/04\/NoemaFinal_Flat947x1186.jpg?fm=pjpg&ixlib=php-3.3.1&s=d6ef26fb5b8b41fd5e568120a0e65360\",\"datePublished\":\"2025-04-08T15:52:37+00:00\",\"dateModified\":\"2025-06-16T16:45:51+00:00\",\"description\":\"Advances in AI are making us reconsider what intelligence is and giving us clues to 
unlocking AI\u2019s full potential.\",\"breadcrumb\":{\"@id\":\"https:\/\/www.noemamag.com\/ai-is-evolving-and-changing-our-understanding-of-intelligence\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.noemamag.com\/ai-is-evolving-and-changing-our-understanding-of-intelligence\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.noemamag.com\/ai-is-evolving-and-changing-our-understanding-of-intelligence\/#primaryimage\",\"url\":\"https:\/\/noemamag.imgix.net\/2025\/04\/NoemaFinal_Flat947x1186.jpg?fm=pjpg&ixlib=php-3.3.1&s=d6ef26fb5b8b41fd5e568120a0e65360\",\"contentUrl\":\"https:\/\/noemamag.imgix.net\/2025\/04\/NoemaFinal_Flat947x1186.jpg?fm=pjpg&ixlib=php-3.3.1&s=d6ef26fb5b8b41fd5e568120a0e65360\",\"width\":947,\"height\":1186,\"caption\":\"Kate Banazi for Noema Magazine\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.noemamag.com\/ai-is-evolving-and-changing-our-understanding-of-intelligence\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.noemamag.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"AI Is Evolving \u2014\u00a0And Changing Our Understanding Of Intelligence\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.noemamag.com\/#website\",\"url\":\"https:\/\/www.noemamag.com\/\",\"name\":\"NOEMA\",\"description\":\"Noema 
Magazine\",\"publisher\":{\"@id\":\"https:\/\/www.noemamag.com\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.noemamag.com\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.noemamag.com\/#organization\",\"name\":\"NOEMA\",\"url\":\"https:\/\/www.noemamag.com\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.noemamag.com\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/noemamag.imgix.net\/2023\/11\/noema-logo.png?fm=png&ixlib=php-3.3.1&s=5f5be9b261a7cf7e336f6f6beea6e539\",\"contentUrl\":\"https:\/\/noemamag.imgix.net\/2023\/11\/noema-logo.png?fm=png&ixlib=php-3.3.1&s=5f5be9b261a7cf7e336f6f6beea6e539\",\"width\":305,\"height\":69,\"caption\":\"NOEMA\"},\"image\":{\"@id\":\"https:\/\/www.noemamag.com\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/NoemaMag\",\"https:\/\/x.com\/NoemaMag\"]}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. 
-->","yoast_head_json":{"title":"AI Is Evolving \u2014 And Changing Our Understanding Of Intelligence","description":"Advances in AI are making us reconsider what intelligence is and giving us clues to unlocking AI\u2019s full potential.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.noemamag.com\/ai-is-evolving-and-changing-our-understanding-of-intelligence\/","og_locale":"en_US","og_type":"article","og_title":"AI Is Evolving \u2014\u00a0And Changing Our Understanding Of Intelligence","og_description":"Advances in AI are making us reconsider what intelligence is and giving us clues to unlocking AI\u2019s full potential.","og_url":"https:\/\/www.noemamag.com\/ai-is-evolving-and-changing-our-understanding-of-intelligence\/","og_site_name":"NOEMA","article_publisher":"https:\/\/www.facebook.com\/NoemaMag","article_modified_time":"2025-06-16T16:45:51+00:00","og_image":[{"width":947,"height":1186,"url":"https:\/\/noemamag.imgix.net\/2025\/04\/NoemaFinal_Flat947x1186.jpg?fm=pjpg&ixlib=php-3.3.1&s=d6ef26fb5b8b41fd5e568120a0e65360","type":"image\/jpeg"}],"twitter_card":"summary_large_image","twitter_image":"https:\/\/noemamag.imgix.net\/2025\/04\/Noema-Twitter-Card-Vertical-Template-2025-04-08T103555.509.png?fm=png&ixlib=php-3.3.1&s=4ccd023a84183f6365159736d69d1ab0","twitter_site":"@NoemaMag","twitter_misc":{"Est. 
reading time":"34 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/www.noemamag.com\/ai-is-evolving-and-changing-our-understanding-of-intelligence\/","url":"https:\/\/www.noemamag.com\/ai-is-evolving-and-changing-our-understanding-of-intelligence\/","name":"AI Is Evolving \u2014 And Changing Our Understanding Of Intelligence","isPartOf":{"@id":"https:\/\/www.noemamag.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.noemamag.com\/ai-is-evolving-and-changing-our-understanding-of-intelligence\/#primaryimage"},"image":{"@id":"https:\/\/www.noemamag.com\/ai-is-evolving-and-changing-our-understanding-of-intelligence\/#primaryimage"},"thumbnailUrl":"https:\/\/noemamag.imgix.net\/2025\/04\/NoemaFinal_Flat947x1186.jpg?fm=pjpg&ixlib=php-3.3.1&s=d6ef26fb5b8b41fd5e568120a0e65360","datePublished":"2025-04-08T15:52:37+00:00","dateModified":"2025-06-16T16:45:51+00:00","description":"Advances in AI are making us reconsider what intelligence is and giving us clues to unlocking AI\u2019s full potential.","breadcrumb":{"@id":"https:\/\/www.noemamag.com\/ai-is-evolving-and-changing-our-understanding-of-intelligence\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.noemamag.com\/ai-is-evolving-and-changing-our-understanding-of-intelligence\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.noemamag.com\/ai-is-evolving-and-changing-our-understanding-of-intelligence\/#primaryimage","url":"https:\/\/noemamag.imgix.net\/2025\/04\/NoemaFinal_Flat947x1186.jpg?fm=pjpg&ixlib=php-3.3.1&s=d6ef26fb5b8b41fd5e568120a0e65360","contentUrl":"https:\/\/noemamag.imgix.net\/2025\/04\/NoemaFinal_Flat947x1186.jpg?fm=pjpg&ixlib=php-3.3.1&s=d6ef26fb5b8b41fd5e568120a0e65360","width":947,"height":1186,"caption":"Kate Banazi for Noema 
Magazine"},{"@type":"BreadcrumbList","@id":"https:\/\/www.noemamag.com\/ai-is-evolving-and-changing-our-understanding-of-intelligence\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.noemamag.com\/"},{"@type":"ListItem","position":2,"name":"AI Is Evolving \u2014\u00a0And Changing Our Understanding Of Intelligence"}]},{"@type":"WebSite","@id":"https:\/\/www.noemamag.com\/#website","url":"https:\/\/www.noemamag.com\/","name":"NOEMA","description":"Noema Magazine","publisher":{"@id":"https:\/\/www.noemamag.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.noemamag.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.noemamag.com\/#organization","name":"NOEMA","url":"https:\/\/www.noemamag.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.noemamag.com\/#\/schema\/logo\/image\/","url":"https:\/\/noemamag.imgix.net\/2023\/11\/noema-logo.png?fm=png&ixlib=php-3.3.1&s=5f5be9b261a7cf7e336f6f6beea6e539","contentUrl":"https:\/\/noemamag.imgix.net\/2023\/11\/noema-logo.png?fm=png&ixlib=php-3.3.1&s=5f5be9b261a7cf7e336f6f6beea6e539","width":305,"height":69,"caption":"NOEMA"},"image":{"@id":"https:\/\/www.noemamag.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/NoemaMag","https:\/\/x.com\/NoemaMag"]}]}},"parsely":{"version":"1.1.0","canonical_url":"https:\/\/noemamag.com\/ai-is-evolving-and-changing-our-understanding-of-intelligence","smart_links":{"inbound":0,"outbound":0},"traffic_boost_suggestions_count":0,"meta":{"@context":"https:\/\/schema.org","@type":"NewsArticle","headline":"AI Is Evolving \u2014\u00a0And Changing Our Understanding Of 
Intelligence","url":"http:\/\/www.noemamag.com\/ai-is-evolving-and-changing-our-understanding-of-intelligence","mainEntityOfPage":{"@type":"WebPage","@id":"http:\/\/www.noemamag.com\/ai-is-evolving-and-changing-our-understanding-of-intelligence"},"thumbnailUrl":"https:\/\/noemamag.imgix.net\/2025\/04\/NoemaFinal_Flat947x1186.jpg?fit=crop&fm=pjpg&h=150&ixlib=php-3.3.1&w=150&wpsize=thumbnail&s=55ab768867f58842d971783b5a93bfaa","image":{"@type":"ImageObject","url":"https:\/\/noemamag.imgix.net\/2025\/04\/NoemaFinal_Flat947x1186.jpg?fm=pjpg&ixlib=php-3.3.1&s=d6ef26fb5b8b41fd5e568120a0e65360"},"articleSection":"Uncategorized","author":[{"@type":"Person","name":"Blaise Ag\u00fcera y Arcas"}],"creator":["Blaise Ag\u00fcera y Arcas"],"publisher":{"@type":"Organization","name":"NOEMA","logo":"https:\/\/www.noemamag.com\/wp-content\/uploads\/2020\/06\/cropped-ms-icon-310x310-1.png"},"keywords":[],"dateCreated":"2025-04-08T15:52:37Z","datePublished":"2025-04-08T15:52:37Z","dateModified":"2025-06-16T16:45:51Z"},"rendered":"<script type=\"application\/ld+json\" class=\"wp-parsely-metadata\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@type\":\"NewsArticle\",\"headline\":\"AI Is Evolving \\u2014\\u00a0And Changing Our Understanding Of Intelligence\",\"url\":\"http:\\\/\\\/www.noemamag.com\\\/ai-is-evolving-and-changing-our-understanding-of-intelligence\",\"mainEntityOfPage\":{\"@type\":\"WebPage\",\"@id\":\"http:\\\/\\\/www.noemamag.com\\\/ai-is-evolving-and-changing-our-understanding-of-intelligence\"},\"thumbnailUrl\":\"https:\\\/\\\/noemamag.imgix.net\\\/2025\\\/04\\\/NoemaFinal_Flat947x1186.jpg?fit=crop&fm=pjpg&h=150&ixlib=php-3.3.1&w=150&wpsize=thumbnail&s=55ab768867f58842d971783b5a93bfaa\",\"image\":{\"@type\":\"ImageObject\",\"url\":\"https:\\\/\\\/noemamag.imgix.net\\\/2025\\\/04\\\/NoemaFinal_Flat947x1186.jpg?fm=pjpg&ixlib=php-3.3.1&s=d6ef26fb5b8b41fd5e568120a0e65360\"},\"articleSection\":\"Uncategorized\",\"author\":[{\"@type\":\"Person\",\"name\":\"Blaise 
Ag\\u00fcera y Arcas\"}],\"creator\":[\"Blaise Ag\\u00fcera y Arcas\"],\"publisher\":{\"@type\":\"Organization\",\"name\":\"NOEMA\",\"logo\":\"https:\\\/\\\/www.noemamag.com\\\/wp-content\\\/uploads\\\/2020\\\/06\\\/cropped-ms-icon-310x310-1.png\"},\"keywords\":[],\"dateCreated\":\"2025-04-08T15:52:37Z\",\"datePublished\":\"2025-04-08T15:52:37Z\",\"dateModified\":\"2025-06-16T16:45:51Z\"}<\/script>","tracker_url":"https:\/\/cdn.parsely.com\/keys\/noemamag.com\/p.js"},"_links":{"self":[{"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/wpm-article\/81753","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/wpm-article"}],"about":[{"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/types\/wpm-article"}],"author":[{"embeddable":true,"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/users\/3610"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/media\/81754"}],"wp:attachment":[{"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/media?parent=81753"}],"wp:term":[{"taxonomy":"wpm-article-type","embeddable":true,"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/wpm-article-type?post=81753"},{"taxonomy":"wpm-article-topic","embeddable":true,"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/wpm-article-topic?post=81753"},{"taxonomy":"wpm-article-tag","embeddable":true,"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/wpm-article-tag?post=81753"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}