{"id":86811,"date":"2026-01-20T17:32:59","date_gmt":"2026-01-20T17:32:59","guid":{"rendered":"https:\/\/www.noemamag.com"},"modified":"2026-01-21T00:26:49","modified_gmt":"2026-01-21T00:26:49","slug":"when-ai-human-worlds-collide","status":"publish","type":"wpm-article","link":"https:\/\/www.noemamag.com\/when-ai-human-worlds-collide","title":{"rendered":"When AI &amp; Human Worlds Collide"},"content":{"rendered":"<p>A robot is learning to make sushi in Kyoto. Not in a sushi-ya, but in a dream. It practices the subtle art of pressing nigiri into form inside its neural network, watching rice grains yield to its grip. It rotates its wrist 10,000 times in an attempt to keep the nori taut around a maki roll. Each failure teaches it something about the dynamics of the world. When its aluminum fingers finally touch rice grains, it already knows how much pressure they can bear.<\/p><div>\n    <iframe loading=\"lazy\" id=\"noa-web-audio-player\"\n            style=\"border: none\"\n            src=\"https:\/\/embed-player.newsoveraudio.com\/v4?key=n0e13g&#038;id=https:\/\/www.noemamag.com\/when-ai-human-worlds-collide\/&#038;bgColor=F3F3F3&#038;color=6D6D6D&#038;progressBgColor=F7F7F7&#038;progressBorderColor=6D6D6D&#038;playColor=F3F3F3&#038;titleColor=383D3D&#038;timeColor=6D6D6D&#038;speedColor=6D6D6D&#038;noaLinkColor=6D6D6D&#038;noaLinkHighlightColor=039BE5\"\n            width=\"100%\" height=\"110px\"><\/iframe>\n<\/div><p>This is the promise of <em>world models<\/em>. For years, artificial intelligence has been defined by its ability to process and translate information \u2014 to autocomplete, recommend and generate. But a different AI paradigm seeks to expand its capabilities further. World models are systems that simulate how environments behave. 
They provide spaces where AI agents can predict how the future might unfold, experiment with cause and effect, and, one day, use the logic they acquire to make decisions in our physical environments.&nbsp;<\/p><p>Large language models currently have the attention of both the AI industry and the wider public, showing remarkable and diverse capabilities. Their multimodal variants can generate exquisite sushi recipes and describe Big Ben\u2019s physical properties solely from a photograph. They guide agents through game environments with increasing sophistication; more recent models can even integrate vision, language and action to direct robot movements through physical space.<\/p><p>Their rise, however, unfolds against a fierce debate over whether these models can yield more human-like and general intelligence simply by continuing to scale their parameters, data and compute.<\/p><p>While this debate is not yet settled, some believe that fundamentally new architectures are required to unlock AI&#8217;s full potential. World models present one such approach. Rather than interacting primarily with language and media patterns, world models create environments that allow AI agents to learn through simulation and experience. These worlds enable agents to test <em>\u201cwhat happens if I do this?\u201d<\/em> by counterfactually experimenting with cause and effect, refining their actions based on the outcomes they produce.<\/p><p>To understand world models, it helps to distinguish between two related concepts: AI models and AI agents. AI models are machine learning algorithms that learn statistical patterns from training data, enabling them to make predictions or generate outputs. Generative AI models are AI models capable of generating new content, which is then integrated into systems that users can interact with, from chatbots like ChatGPT to video generators like Veo. 
AI agents, by contrast, are systems that use such models to act autonomously in different environments. Coding agents, for example, can perform programming tasks while using digital tools. The abundance of digital data makes training such agents feasible for digital tasks, but enabling them to act in the physical world remains a harder challenge.<\/p><p>World models are an emerging type of AI model that agents can use to learn how to act in an environment. They take two distinct forms. Internal world models are abstract representations that live within an AI agent\u2019s architecture, serving as compressed mental simulations for planning. What can be called interactive world models, on the other hand, generate rich, navigable environments that any user can explore and agents can train within.<\/p><p>The aspiration behind world models is to move from generating content to simulating dynamics. Rather than providing the steps to a recipe, they seek to simulate how rice responds to pressure, enabling agents to learn the act of pressing sushi. The ultimate goal is to develop world models that simulate aspects of the real world accurately enough for agents to learn from and ultimately act within them. Yet this ambition to represent the underlying dynamics of the world rather than the surface patterns of language or media may prove to be a far greater challenge, given the staggering complexity of reality.<\/p><h2 class=\"wp-block-heading\" id=\"h-our-own-world-models\">Our Own World Models<\/h2><p>Since their conceptual origins decades ago, world models have become a promising AI frontier. 
Many of the thinkers shaping modern AI \u2014 including Yann LeCun, Fei-Fei Li, Yoshua Bengio and Demis Hassabis \u2014 have acknowledged that this paradigm could pave new pathways to more human-like intelligence.<\/p><p>To understand why this approach might matter, it helps to take a closer look at how we ourselves came to know the world.<\/p><p>Human cognition evolved through contact with our three-dimensional environment, where spatial reasoning contributes to our ability to infer cause and effect. From infancy, we learn through our bodies. By dropping a ball or lifting a pebble, we refine our intuitive sense of gravity, helping us anticipate how other objects might behave. In stacking and toppling blocks, babies begin to grasp the rules of our world, learning by engaging with its physical logic. 
The causal structure of spatial reality is the fabric upon which human and animal cognition take shape.<\/p><p>The world model approach draws inspiration from biological learning mechanisms, and particularly from how our brains use simulation and prediction. The mammalian prefrontal cortex is central to counterfactual reasoning and goal-directed planning, enabling the brain to simulate, test and update internal representations of the world. World models attempt to reproduce aspects of this capacity synthetically. They draw on what cognitive scientists call &#8220;mental models,&#8221; abstracted internal representations of how things work, shaped by prior perception and experience.<\/p><p>\u201cThe mental image of the world around you which you carry in your head is a model,\u201d pioneering computer engineer Jay Wright Forrester once <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/abs\/pii\/S004016257180001X?via%3Dihub\">wrote<\/a>. We don&#8217;t carry entire cities or governments in our heads, he continued, but only selected concepts and relationships that we use to represent the real system. World models aim to explicitly provide machines with such representations.<\/p><p>While language models appear to develop <a href=\"https:\/\/news.mit.edu\/2024\/llms-develop-own-understanding-of-reality-as-language-abilities-improve-0814\">some implicit world representations<\/a> through their training, world models take an explicit spatial and temporal approach to these representations. They provide spaces where AI agents can test how environments respond to their actions before executing them in the real world. Through iterative interaction in these simulated spaces, AI agents refine their &#8220;action policies&#8221; \u2014 their internal strategies for how to act. This learning, based on simulating possible futures, may prove particularly valuable for tasks requiring long-horizon planning in complex environments. 
Where language models shine in recognizing the word that typically comes next, world models enable agents to better predict how an environment might change in response to their actions. Both approaches may prove essential \u2014 one to teach machines about our world, the other to let them rehearse their place within it.<\/p><p>This shift, from pattern recognition to causal prediction, makes world models more than just tools for better gaming and entertainment \u2014 they may be synthetic incubators shaping the intelligence that one day emerges, embodied in our physical world. When predictions become actions, errors carry physical weight. While this vision remains a relatively distant future, the choices we make about the nature of these worlds will influence the ethics of the agents that rely on them.<\/p><h2 class=\"wp-block-heading\" id=\"h-how-machines-construct-worlds\">How Machines Construct Worlds<\/h2><p>Despite its recent resurgence, the idea of world models is not new. In 1943, cybernetics pioneer Kenneth Craik <a href=\"https:\/\/psycnet.apa.org\/record\/1944-00640-000\">proposed<\/a> that organisms carry \u201csmall-scale models\u201d of reality in their heads to predict and evaluate future scenarios. In the 1970s and 1980s, early AI and robotics researchers extended these mental model foundations into computational terms, using the phrase \u201cworld models\u201d to describe a system\u2019s representation of the environment. This early work was mostly theoretical, as researchers lacked the tools we have today.<\/p><p>A 2018 <a href=\"https:\/\/worldmodels.github.io\/\">paper<\/a> by AI researchers David Ha and J\u00fcrgen Schmidhuber \u2014 building on previous work from the 1990s \u2014 offered a compelling demonstration of what world models could achieve. The researchers showed that AI systems can autonomously learn and navigate complex environments using internal world models. 
They developed a system architecture that learned to play a driving video game solely from the game\u2019s raw pixel data. Perhaps most remarkably, the AI agent could be trained entirely in its \u201cdream world\u201d \u2014 not literal dreams, but training runs in what researchers call a \u201clatent space,\u201d an abstract, compact representation of the game environment. This space serves as a compressed mental sketch of the world where the agent learns to act.&nbsp;<\/p><p>Without world models, agents must learn directly from real experience or pre-existing data. With world models, they can generate their own practice scenarios to distill how they should act in different situations. This internal simulation acts as a predictive engine, giving the agent a form of artificial intuition \u2014 allowing for fast, reflexive decisions without the need to stop and plan. Ha and Schmidhuber likened this to how a baseball batter can instinctively predict the path of a fastball and swing, rather than having to carefully analyze every possible trajectory.<\/p><p>This breakthrough was followed by a wave of additional progress, pushing the boundaries of what world models could represent and how far their internal simulations could stretch. 
Each advancement hinted at a broader shift \u2014 AI agents were beginning to learn from their own internally generated experience.<\/p><p>Recently, another significant development in AI raised new questions about how agents might learn about the real world. Breakthroughs in <em>video generation<\/em> models led to the scaled production of videos that seemed to capture subtle real-world physics. Online, users <a href=\"https:\/\/www.reddit.com\/r\/singularity\/comments\/1hgoesi\/veo_physics_understanding_is_crazy_look_at_the\/\">admired<\/a> tiny details in those videos: <a href=\"https:\/\/x.com\/shlomifruchter\/status\/1868974877904191917\">blueberries<\/a> plunging into water and releasing airy bubbles, <a href=\"https:\/\/www.reddit.com\/r\/singularity\/comments\/1hfw1vg\/this_is_one_of_the_most_impressive_ai_generated\/\">tomatoes<\/a> slicing thinly under the glide of a knife. As people shared and marveled at these videos, something deeper was happening beneath the surface. 
To generate such videos, models reflect patterns that seem consistent with physical laws, such as fluid dynamics and gravity. This led researchers <a href=\"https:\/\/arxiv.org\/pdf\/2405.03520\">to wonder<\/a> if these models were not just generating clips but beginning to simulate how the world works. In early 2024, OpenAI itself <a href=\"https:\/\/openai.com\/index\/video-generation-models-as-world-simulators\/\">hypothesized<\/a> that advances in video generation may offer a promising path toward highly capable world simulators.&nbsp;<\/p><p>Whether or not AI models that generate video qualify as world simulators, advances in generative modeling helped trigger a pivotal shift in world models themselves. Until recently, world models lived entirely inside the system\u2019s architecture \u2014 latent spaces only for the agent\u2019s own use. But the breakthroughs in generative AI of recent years have made it possible to build interactive world models \u2014 worlds you can actually see and experience. These systems take text prompts (\u201cgenerate 17th-century London\u201d) or other inputs (a photo of your living room) to generate entire three-dimensional interactive worlds. While video-generating models can depict the world, interactive world models instantiate the world, allowing users or agents to interact with it and affect what happens rather than simply watching things unfold.<\/p><p>Major AI labs are now investing heavily in these interactive world models, with some showing signs of deployment maturity, though approaches vary. Google DeepMind\u2019s Genie series turns text prompts into striking, diverse, interactive digital worlds that continuously evolve in real time \u2014 using internal latent representations to predict dynamics and render them into explorable environments, some of which appear real-world-like in both visual fidelity and physical dynamics. 
Fei-Fei Li\u2019s World Labs <a href=\"https:\/\/techcrunch.com\/2025\/11\/12\/fei-fei-lis-world-labs-speeds-up-the-world-model-race-with-marble-its-first-commercial-product\/\">recently released<\/a> Marble, which takes a different approach, letting users transform various inputs into editable and downloadable environments. Runway, a company known for its video generation models, <a href=\"https:\/\/techcrunch.com\/2025\/12\/11\/runway-releases-its-first-world-model-adds-native-audio-to-latest-video-model\/\">recently launched<\/a> GWM-1, a world model family that includes explorable environments and robotics, where simulated scenarios can be used to train robot behavior.<\/p><p>Some researchers, however, are skeptical that generating visuals, or pixels, will lead anywhere useful for agent planning. Many believe that world models should predict in compressed, abstract representations without generating pixels \u2014 much as we might predict that dropping a cup will cause it to break without mentally rendering every shard of glass. <\/p><p>LeCun, who recently <a href=\"https:\/\/www.threads.com\/@yannlecun\/post\/DRQL7_0DmDI?xmt=AQF0aTQnHZw6zEfTzw_XuebVreVHPzcZEaD4WWB7UTRilw\">announced<\/a> his departure from Meta to launch Advanced Machine Intelligence, a company focused on world models, <a href=\"https:\/\/x.com\/ylecun\/status\/1759486703696318935\">has been critical<\/a> of approaches that rely on generating pixels for prediction and planning, arguing that they are \u201cdoomed to failure.\u201d According to his view, visually reconstructing such complex environments is \u201cintractable\u201d because it tries to model highly unpredictable phenomena, wasting resources on irrelevant details. While researchers debate the optimal path forward, the functional result remains that machines are beginning to learn something about world dynamics from synthetic experience.&nbsp;<\/p><p>World models are impressive in their own right and offer various applications. 
In gaming, for instance, interactive world models may soon be used to help generate truly open worlds \u2014 environments that uniquely evolve with a player\u2019s choices rather than relying on scripted paths. As someone who grew up immersed in \u201copen world\u201d games of past decades, I relished the thrill of their apparent freedom. Yet even these gaming worlds were always finite, their characters repeating the same lines. Interactive world models bring closer the prospect of worlds that don\u2019t just feel alive but behave as if they are.&nbsp;<\/p><h2 class=\"wp-block-heading\" id=\"h-toward-physical-embodiment\">Toward Physical Embodiment<\/h2><p>Gaming, however, is merely a steppingstone. The transformative promise of world models lies in physical embodiment and reasoning \u2014 AI agents that can navigate our world, rather than just virtual ones. The concept of embodiment is central to cognitive science, which holds that our bodies and sensorimotor capacities shape our cognition. In 1945, French philosopher Maurice Merleau-Ponty <a href=\"https:\/\/www.routledge.com\/Phenomenology-of-Perception\/Merleau-Ponty\/p\/book\/9780415834339\">observed<\/a>: \u201cthe body is our general medium for having a world.\u201d We <em>are<\/em> our body, he argued. We don\u2019t <em>have<\/em> a body. In its AI recasting, embodiment refers to systems situated in physical or digital spaces, using some form of body and perception to interact with both users and their surroundings.&nbsp;<\/p><p>Physically embodied AI offers endless new deployment possibilities, from wearable companions to robotics. But it runs up against a stubborn barrier \u2014 the real world is hard to learn from. 
The internet flooded machine learning with text, images and video, creating the digital abundance that served as the bedrock for language models and other generative AI systems.<\/p><p>Physical data, however, is different. It is scarce, expensive to capture and constrained by the fact that it must be gathered through real actions unfolding in real time. Training partially capable robots in the real world, and outside of lab settings, might lead to dangerous consequences. To be useful, physical data also needs to be diverse enough to fit the messy particulars of reality. A robot that learns to load plates into a dishwasher in one kitchen learns little about how to handle a saucepan in another. Every environment is different. Every skill must be learned in its own corner of reality, one slow interaction at a time.<\/p><p>World models offer a way through this conundrum. 
By generating rich, diverse and responsive environments, they create rehearsal space for physically embodied systems \u2014 places where robots can learn from the experiences of a thousand lifetimes in a fraction of the time, without ever touching the physical world. This promise is taking its first steps toward reality.<\/p><p>In just the past few years, significant applications of world models in robotics have emerged. Nvidia <a href=\"https:\/\/nvidianews.nvidia.com\/news\/nvidia-launches-cosmos-world-foundation-model-platform-to-accelerate-physical-ai-development\">unveiled<\/a> a world model platform that helps developers build customized world models for their physical AI setups. Meta\u2019s world models <a href=\"https:\/\/arxiv.org\/abs\/2506.09985\">have demonstrated<\/a> concrete robotics capabilities, guiding robots to perform tasks such as grasping objects and moving them to new locations in environments they were never trained in. Google DeepMind and <a href=\"https:\/\/runwayml.com\/research\/introducing-runway-gwm-1\">Runway<\/a> have shown that world models can serve robotics \u2014 whether by testing robot behavior or generating training scenarios. The AI and robotics company 1X <a href=\"https:\/\/www.forbes.com\/sites\/erikkain\/2025\/10\/29\/this-20000-neo-robot-will-clean-your-home-but-theres-a-catch-and-its-kind-of-terrifying\/\">grabbed global attention<\/a> when it released a <a href=\"https:\/\/www.youtube.com\/watch?v=LTYMWadOW7c\">demo<\/a> of its humanoid home assistant tidying shelves and outlining its various capabilities, such as suggesting meals based on the contents of a fridge. 
Though their robot is currently teleoperated with human involvement, its every interaction captures physically embodied data that feeds back into the <a href=\"https:\/\/www.humanoidsdaily.com\/news\/1x-reveals-its-world-model-a-digital-twin-to-accelerate-humanoid-ai-training\">1X world model<\/a>, enabling it to learn from real-world data to improve its accuracy and quality.<\/p><p>But alongside advancements in world models, the other half of this story lies with the AI agents themselves. In a <a href=\"https:\/\/www.nature.com\/articles\/s41586-025-08744-2\">2025 Nature article<\/a>, the Dreamer agent demonstrated the ability to collect diamonds in Minecraft without relying on human data or demonstration; instead, it derived its strategy solely from the logic of the environment by repeatedly testing what worked there, as if feeling its way toward competence from first principles. Elsewhere, recent work from Google DeepMind hints at what a new kind of general AI agent might look like. By learning from diverse video games, its language model-based SIMA agent translates language into action in three-dimensional worlds. Tell SIMA to \u201cclimb the ladder,\u201d and it complies, performing actions even in games it&#8217;s never seen. A new version of this agent has recently shown its ability to self-learn, even in worlds generated by the world model Genie.<\/p><p>In essence, two lines of progress are beginning to meet. 
On one side, AI agents that learn to navigate and self-improve in any three-dimensional digital environment; on the other, systems that simulate endless, realistic three-dimensional worlds or their abstracted dynamics, with which agents can interact. Together, they may provide the unprecedented capability to run virtually endless simulations in which agents can refine their abilities across variations of experience. If these systems keep advancing, the agents shaped within such synthetic worlds may eventually become capable enough to be embodied in our physical one. In this sense, world models could incubate agents to hone their basic functions before taking their first steps into reality.<\/p><p>As world models move from the research frontier into early production, their concrete deployment pathways remain largely uncertain. Their near-term horizon in gaming is becoming clear, while the longer horizon of broad robotics deployment still requires significant technical breakthroughs in architectures, data, physical machinery and compute. But it is increasingly plausible that an intermediate stage will emerge \u2014 world models embedded in wearable devices and ambient AI companions that use spatial intelligence to guide users through their environment. Much like the 1X humanoid assistant guiding residents through their fridge, world-model-powered AI could one day mediate how people perceive, move through and make decisions within their everyday environments.<\/p><h2 class=\"wp-block-heading\" id=\"h-the-collingridge-dilemma\">The Collingridge Dilemma<\/h2><p>Whether world models ultimately succeed through pixel-level generation or more abstract prediction, their underlying paradigm shift \u2014 from modeling content to modeling dynamics \u2014 raises questions that transcend any architecture. 
Beyond the technological promise of world models, their trajectory carries profound implications for how intelligence may take form and how humans may come to interact with it.<\/p><p>Even if world models never yield human-level intelligence, the shift from systems that model the world through language and media patterns to systems that model it through interactive simulation could fundamentally reshape how we engage with AI and to what end. 
The societal implications of world modeling capabilities remain largely uncharted as attention from the humanities and social sciences lags behind the pace of computer science progress.<\/p><p>As a researcher in the philosophy of AI \u2014 and having spent more than a decade working in AI governance and policy roles inside frontier AI labs and technology companies \u2014 I\u2019ve observed a familiar pattern: Clarity about the nature of emerging technologies and their societal implications tends to arrive only in retrospect, a problem known as the \u201cCollingridge dilemma.\u201d This dilemma reminds us that by the time a technology\u2019s consequences become visible, it is often too entrenched to change.<\/p><p>We can begin to address this dilemma by bringing conceptual clarity to emerging technologies early, while their designs can still be shaped. World models present such a case. They are becoming mature enough to analyze meaningfully, yet it\u2019s early enough in their development that such analysis could affect their trajectory. Examining their conceptual foundations now \u2014 what these systems represent, how they acquire knowledge, what failure modes they might exhibit \u2014 could help inform crucial aspects of their design.<\/p><h2 class=\"wp-block-heading\" id=\"h-nbsp-a-digital-plato-s-cave\">A Digital Plato\u2019s Cave<\/h2><p>The robot in Los Angeles, learning to make sushi in Kyoto, exists in a peculiar state. It knows aspects of the world without ever directly experiencing them. But what is the content of the robot\u2019s knowledge? How is it formed? Under what conditions can we trust its synthetic world view, once it begins to act in ours?<\/p><p>Beginning to answer these questions reveals important aspects of the nature of world models. Designed to capture the logic of the real world, they draw loose inspiration from human cognition. But they also present a deep asymmetry. Humans learn about reality <em>from<\/em> <em>reality<\/em>. 
World models learn primarily <em>from representations<\/em> of it \u2014 such as millions of hours of curated videos, distilled into statistical simulacra of the world. What they acquire is not experience itself, but an approximation of it \u2014 a digital <a href=\"https:\/\/www.masterclass.com\/articles\/allegory-of-the-cave-explainede\">Plato\u2019s Cave<\/a>, offering shadows of the world rather than the world itself.<\/p><p>Merleau-Ponty\u2019s argument that we are our body is inverted by world models. They offer AI agents knowledge of embodiment without embodiment itself. In a sense, the sushi-making robot is learning through a body it has never inhabited \u2014 and the nature of that learning brings new failure modes and risks.<\/p><p>Like other AI systems, world models compress these representations of reality into abstract patterns, a process fraught with loss. As semanticist Alfred Korzybski famously <a href=\"https:\/\/unearnedwisdom.com\/the-map-is-not-the-territory-meaning\/\">observed<\/a>, <em>\u201ca map is not the territory.\u201d<\/em> World models, both those that generate rich visual environments and those that operate in latent spaces, are still abstractions. They learn statistical approximations of physics from video data, not the underlying laws themselves.<\/p><p>But because world models compress dynamics rather than just content, what gets lost is not just information but physical and causal intuition. A simulated environment may appear physically consistent on its face, while omitting important properties \u2014 rendering water that flows beautifully but lacks viscosity, or metal that bends without appropriate resistance.<\/p><p>AI systems tend to lose the rare and unusual first, often the very situations where safety matters most. A child darting into traffic, a glass shattering at the pour of boiling tea, the unexpected give of rotting wood. 
These extreme outliers, though rare in training data, become matters of life and safety in the real world. What may remain in the world model\u2019s representation is an environment smoothed into routine, blind to critical exceptions.<\/p><p>With these simplified maps, agents may learn to navigate our world. Their compass, however, is predefined \u2014 a reward function that evaluates and shapes their learning. As with other reinforcement learning approaches, failing to properly specify a reward <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S2666389922000563\">evokes<\/a> Goodhart\u2019s Law: <em>when a measure becomes a target, it ceases to be a good measure.<\/em> A home-cleaning agent rewarded for \u201ctaking out the trash\u201d quickly loses its appeal if it dumps the trash in the garden, or carries it back inside so that it can be rewarded for taking it out again.<\/p><!-- Quote Block Template -->\n\n<figure class=\"quote\">\n\n  <blockquote class=\"quote__container\">\n\n    <div class=\"quote__text\">\n      &#8220;Because world models compress dynamics rather than just content, what gets lost is not just information but physical and causal intuition.&#8221;    <\/div>\n\n    \n    <div class=\"quote__social-media\">\n      <div\n        class=\"a2a_kit a2a_kit_size_35 a2a_default_style\"\n        data-a2a-url=\"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/wpm-article\/86811\"\n        data-a2a-title='\"Because world models compress dynamics rather than just content, what gets lost is not just information but physical and causal intuition.\"'\n      >\n        <a class=\"a2a_button_facebook\"><\/a>\n        <a class=\"a2a_button_twitter\"><\/a>\n        <a class=\"a2a_button_email\"><\/a>\n      <\/div>\n    <\/div>\n  <\/blockquote>\n<\/figure><p>While traditional simulations encode physical principles directly, world models learn patterns.
In their constructed worlds, pedestrians might open umbrellas because sidewalks are wet, never realizing that rain causes both. A souffl\u00e9 might rise instantly because most cooking videos they&#8217;ve learned from skip the waiting time. Through <a href=\"https:\/\/arxiv.org\/pdf\/2209.13085\">reward hacking<\/a> \u2014 a well-documented problem in reinforcement learning \u2014 agents may discover and exploit quirks that only exist in their simulated physics. Like speedrunners \u2014 gamers who hunt for glitches that let them walk through walls or skip levels \u2014 they may optimize for shortcuts that fail in reality.<\/p><p>These are old problems in new clothes, transferring the risks of previous AI systems \u2014 <a href=\"https:\/\/spectrum.ieee.org\/ai-failures\">brittleness<\/a>, bias, hallucination \u2014 from information to action. All machine learning abstracts from data. But while language models can hallucinate facts and seem coherent, world models may be wrong about physics and still appear visually convincing. Physical embodiment further transforms the stakes. What once misled may now injure. A misunderstood physics pattern becomes a shattered glass; a misread social cue becomes an uncomfortable interaction.<\/p><p>While humans can weigh a chatbot\u2019s outputs before acting on them, an embodied AI agent may act without any human to filter or approve its actions \u2014 like the Waymo car <a href=\"https:\/\/www.latimes.com\/california\/story\/2025-11-03\/waymo-kills-kitkat-the-cat-and-san-francisco-mourns\">that struck<\/a> KitKat, a beloved neighborhood cat in San Francisco \u2014 an outcome a human driver might have prevented.
These issues are compounded by the complex world model and agent stack; its layered components make it hard to trace the source of any failure: Is it the agent&#8217;s policy, the world model&#8217;s physics or the interaction between them?<\/p><p>Many of these safety concerns manifest as optimization challenges similar to those the technical community has faced before, but solving them is also an ethical imperative. Robotics researchers bring years of experience navigating the so-called \u201csim-to-real\u201d gap \u2014 the challenge of translating simulated learning into physical competence. But such existing disciplines may need to adapt to the nature of world models \u2014 rather than fine-tuning the dials of hard-coded physics simulations, they must now verify the integrity of systems that have taught themselves how the world works. As competition intensifies, the need for careful evaluation and robustness work is likely to increase.<\/p><p>Leading labs recognize these inherent complexities and are grounding their world models in real-world data. This enables them to calibrate their models for the environments their physically embodied systems inhabit. 1X, for example, grounds its world models in video data continuously collected by its robotics fleet, optimizing for the particularities of physical homes. Such environment-specific approaches, still anchored in real-world data, will likely precede the dream of a general agent, as interactive world models will initially simulate narrow environments and tasks. However, for lighter-stakes embodiments like wearables, the push for generality may arrive sooner.<\/p><p>Beyond these characteristics, world models have distinctive features that raise new considerations. Many of these are sociotechnical \u2014 where human design choices carry ethical weight.
Unlike language models, world models reason in space and time \u2014 simulating what would happen under different actions and guiding behavior accordingly.<\/p><p>Through the dynamics simulated by world models, agents may infer how materials deform under stress or how projectiles behave in the wind. While weaponized robots may seem distant, augmented reality systems that guide users through dangerous actions need not wait for breakthroughs in robotics dexterity. This raises fundamental design questions about world models that carry moral weight: What types of knowledge should we instill in agents that may be physically embodied, and how can we design world models to prevent self-learning agents from acquiring potentially dangerous knowledge?<\/p><p>Beyond physical reasoning lies the more speculative frontier of modeling social dynamics. Human cognition evolved at least in part as a social simulator \u2014 predicting other minds was once as vital as predicting falling objects. While today\u2019s world models focus on physical dynamics, nothing in principle prevents similar approaches from capturing social ones. To a machine learning system, a furrowed brow or a shift in posture is simply a physical pattern that precedes a specific outcome. Were such models to simulate social interactions, they could enable agents to develop intuitions about human behavior \u2014 sensing discomfort before it is voiced, reacting to micro-expressions or adjusting tone based on feedback.<\/p><p><a href=\"https:\/\/arxiv.org\/abs\/2506.22355\">Some researchers<\/a> have begun exploring adjacent territory under the label \u201cmental world models,\u201d suggesting that embodied AI could benefit from having a mental model of human relationships and user emotions.
Such capabilities could make AI companions more responsive but also more persuasive \u2014 raising concerns about AI manipulation and questions about which social norms these systems might amplify.<\/p><!-- Quote Block Template -->\n\n<figure class=\"quote\">\n\n  <blockquote class=\"quote__container\">\n\n    <div class=\"quote__text\">\n      &#8220;Thoughtful engagement with the world model paradigm now will shape not just how such future agents learn, but what values their actions represent and how they might interact with people.&#8221;    <\/div>\n\n    \n    <div class=\"quote__social-media\">\n      <div\n        class=\"a2a_kit a2a_kit_size_35 a2a_default_style\"\n        data-a2a-url=\"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/wpm-article\/86811\"\n        data-a2a-title='\"Thoughtful engagement with the world model paradigm now will shape not just how such future agents learn, but what values their actions represent and how they might interact with people.\"'\n      >\n        <a class=\"a2a_button_facebook\"><\/a>\n        <a class=\"a2a_button_twitter\"><\/a>\n        <a class=\"a2a_button_email\"><\/a>\n      <\/div>\n    <\/div>\n  <\/blockquote>\n<\/figure><p>These implications compound at scale. Widely deploying world models shifts our focus from individual-level considerations to societal-level ones. Reliable predictive capabilities may accelerate our existing tendency to outsource decisions to machines, with real implications for human autonomy. Useful systems embedded in wearable companions could gather unprecedented streams of spatial and behavioral data, creating significant new privacy and security risks. The expected advancement in robotics capabilities might also disrupt physical labor markets.<\/p><p>World models suggest a future where our engagement with the world is increasingly mediated by the synthetic logic of machines.
One where the map no longer just describes our world but begins to shape it.<\/p><h2 class=\"wp-block-heading\" id=\"h-building-human-worlds\">Building Human Worlds<\/h2><p>These challenges are profound, but they are not inevitable. The science of world models remains in relative infancy, with a long horizon expected before it matures into wide deployment. Thoughtful engagement with the world model paradigm now will shape not just how such future agents learn, but what values their actions represent and how they might interact with people. An overly precautionary approach risks its own moral failure. Just as the printing press democratized knowledge despite enabling propaganda, and cars transformed transportation while producing new perils, world models promise benefits that may far outweigh their risks. The question isn&#8217;t whether to build them, but how to design them to best harness their benefits.<\/p><p>This transformative potential extends far beyond the joyful escapism of gaming or the convenience of laundry-folding robots. In transportation, advances in autonomous vehicles could improve overall safety. In medicine, world models could enable surgical robots to rehearse countless variations of a procedure before encountering a single patient, increasing precision and enhancing access to specialized care. Perhaps most fundamentally, they may help humans avoid what roboticists call the \u201cthree Ds\u201d \u2014 tasks that are dangerous, dirty or dull \u2014 by relegating such tasks to machines. And if world models deliver on their promise that simulated environments enable richer causal reasoning, they could help revolutionize scientific discovery, the domain many in the field consider the ultimate achievement of AI.<\/p><p>Realizing this promise, however, requires more than techno-optimism; it needs concrete steps to scaffold these benefits.
The embodiment safety field is already adapting crucial insights from traditional robotics simulations to their world model variants. Other useful precedents can be found in adjacent industries. The autonomous vehicle industry spent years painstakingly developing validation frameworks that verify both simulated and real-world performance. New industries can leverage these insights, as world models could open opportunities in domains where tolerance for error is narrow \u2014 surgical robotics, home assistance, industrial automation \u2014 each requiring its own careful calibration of acceptable risk. For regulators, these more mature frameworks offer a concrete starting point and an opportunity for foresight that could enable beneficial deployment.<\/p><p>World models themselves offer unique opportunities for safety research. Researchers like Yann LeCun argue that world model architectures may be more controllable than language models \u2014 involving objective-driven agents whose goals can be specified with safety and ethics in mind. Beyond architecture, some world models may serve as digital proving grounds for testing robot behavior before physical deployment.<\/p><p>Google DeepMind <a href=\"https:\/\/arxiv.org\/abs\/2512.10675\">recently demonstrated<\/a> that its Veo video model can predict robot behavior, using its video-generation capabilities to simulate how robots would act in real-world scenarios. The study showed that such simulations can help discover unsafe behaviors that would be dangerous to test on physical hardware, such as a robot inadvertently closing a laptop on a pair of scissors left on its keyboard. Beyond testing how robots act, world models themselves would need to be audited to ensure they align with the physical world.
This presents a challenge that is as much ethical as it is technical: determining which world dynamics are worth modeling and defining what \u201cgood enough\u201d means.<\/p><p>Ultimately, early design decisions will dictate the societal outcomes of world model deployment. Choosing what data world models learn from is not just a technical decision, but a sociotechnical one, defining the boundaries of what agents may self-learn. The behaviors and physics we accept in gaming environments differ deeply from what we may tolerate in a physical embodiment. The time to ask whether and how we would like to pursue certain capabilities, such as social world modeling, is now.<\/p><p>These deployments also raise broader governance implications. Existing privacy frameworks will likely need to be updated to account for the scale and granularity of spatial and behavioral data that world model-powered systems may harvest. Policymakers, accustomed to analyzing AI through the lens of language processing, must now grapple with systems trained to represent the dynamics of reality. And since existing AI risk frameworks do not adequately capture the risks posed by such systems, they, too, may soon require updating.<\/p><p>The walls of this digital cave are not yet set in stone. Our task is to ensure that the synthetic realities we construct are not just training grounds for efficiency, but incubators for an intelligence that accounts for the social and ethical intricacies of our reality. The design choices we make about what dynamics to simulate and what behaviors to reward will shape the AI agents that emerge in the future.
By blending technical rigor with philosophical foresight, we can ensure that when these shadows are projected back into our own world, they do not darken it but illuminate it instead.<\/p>\n          <div class=\"eos-subscribe-push\">\n          \n            <a target=\"https:\/\/shop.noemamag.com\/?utm_source=BottomCTA&utm_medium=website\" href=\"https:\/\/shop.noemamag.com\/?utm_source=BottomCTA&utm_medium=website\" data-wpel-link=\"internal\">Enjoy the read? Subscribe to get the best of Noema.<\/a>\n            \n          <\/div>\n        ","protected":false},"excerpt":{"rendered":"","protected":false},"author":7189,"featured_media":86812,"template":"","wpm-article-type":[3],"wpm-article-topic":[20],"wpm-article-tag":[],"class_list":["post-86811","wpm-article","type-wpm-article","status-publish","has-post-thumbnail","hentry","wpm-article-type-essay","wpm-article-topic-technology-and-the-human"],"acf":[],"apple_news_notices":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v25.0 (Yoast SEO v25.0) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>When AI &amp; Human Worlds Collide<\/title>\n<meta name=\"description\" content=\"Can we imagine a future where synthetic AI worlds shape ours?\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.noemamag.com\/when-ai-human-worlds-collide\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"When AI &amp; Human Worlds Collide\" \/>\n<meta property=\"og:description\" content=\"Can we imagine a future where synthetic AI worlds shape ours?\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.noemamag.com\/when-ai-human-worlds-collide\/\" \/>\n<meta property=\"og:site_name\" content=\"NOEMA\" \/>\n<meta property=\"article:publisher\" 
content=\"https:\/\/www.facebook.com\/NoemaMag\" \/>\n<meta property=\"article:modified_time\" content=\"2026-01-21T00:26:49+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/noemamag.imgix.net\/2026\/01\/Final-01.jpg?fm=pjpg&ixlib=php-3.3.1&s=4eb17f4196cee64c0cd8dcdeaca6454b\" \/>\n\t<meta property=\"og:image:width\" content=\"947\" \/>\n\t<meta property=\"og:image:height\" content=\"1186\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:image\" content=\"https:\/\/noemamag.imgix.net\/2026\/01\/Noema-Twitter-Card-Vertical-Template-2026-01-20T112013.359.png?fm=png&ixlib=php-3.3.1&s=7b7db6a4e501dc4bc55f889720afbdef\" \/>\n<meta name=\"twitter:site\" content=\"@NoemaMag\" \/>\n<meta name=\"twitter:label1\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"24 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.noemamag.com\/when-ai-human-worlds-collide\/\",\"url\":\"https:\/\/www.noemamag.com\/when-ai-human-worlds-collide\/\",\"name\":\"When AI & Human Worlds Collide\",\"isPartOf\":{\"@id\":\"https:\/\/www.noemamag.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.noemamag.com\/when-ai-human-worlds-collide\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.noemamag.com\/when-ai-human-worlds-collide\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/noemamag.imgix.net\/2026\/01\/Final-01.jpg?fm=pjpg&ixlib=php-3.3.1&s=4eb17f4196cee64c0cd8dcdeaca6454b\",\"datePublished\":\"2026-01-20T17:32:59+00:00\",\"dateModified\":\"2026-01-21T00:26:49+00:00\",\"description\":\"Can we imagine a future where synthetic AI worlds shape 
ours?\",\"breadcrumb\":{\"@id\":\"https:\/\/www.noemamag.com\/when-ai-human-worlds-collide\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.noemamag.com\/when-ai-human-worlds-collide\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.noemamag.com\/when-ai-human-worlds-collide\/#primaryimage\",\"url\":\"https:\/\/noemamag.imgix.net\/2026\/01\/Final-01.jpg?fm=pjpg&ixlib=php-3.3.1&s=4eb17f4196cee64c0cd8dcdeaca6454b\",\"contentUrl\":\"https:\/\/noemamag.imgix.net\/2026\/01\/Final-01.jpg?fm=pjpg&ixlib=php-3.3.1&s=4eb17f4196cee64c0cd8dcdeaca6454b\",\"width\":947,\"height\":1186,\"caption\":\"Setu Choudhary for Noema Magazine\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.noemamag.com\/when-ai-human-worlds-collide\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.noemamag.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"When AI &amp; Human Worlds Collide\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.noemamag.com\/#website\",\"url\":\"https:\/\/www.noemamag.com\/\",\"name\":\"NOEMA\",\"description\":\"Noema 
Magazine\",\"publisher\":{\"@id\":\"https:\/\/www.noemamag.com\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.noemamag.com\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.noemamag.com\/#organization\",\"name\":\"NOEMA\",\"url\":\"https:\/\/www.noemamag.com\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.noemamag.com\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/noemamag.imgix.net\/2023\/11\/noema-logo.png?fm=png&ixlib=php-3.3.1&s=5f5be9b261a7cf7e336f6f6beea6e539\",\"contentUrl\":\"https:\/\/noemamag.imgix.net\/2023\/11\/noema-logo.png?fm=png&ixlib=php-3.3.1&s=5f5be9b261a7cf7e336f6f6beea6e539\",\"width\":305,\"height\":69,\"caption\":\"NOEMA\"},\"image\":{\"@id\":\"https:\/\/www.noemamag.com\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/NoemaMag\",\"https:\/\/x.com\/NoemaMag\"]}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. 
-->","yoast_head_json":{"title":"When AI & Human Worlds Collide","description":"Can we imagine a future where synthetic AI worlds shape ours?","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.noemamag.com\/when-ai-human-worlds-collide\/","og_locale":"en_US","og_type":"article","og_title":"When AI &amp; Human Worlds Collide","og_description":"Can we imagine a future where synthetic AI worlds shape ours?","og_url":"https:\/\/www.noemamag.com\/when-ai-human-worlds-collide\/","og_site_name":"NOEMA","article_publisher":"https:\/\/www.facebook.com\/NoemaMag","article_modified_time":"2026-01-21T00:26:49+00:00","og_image":[{"width":947,"height":1186,"url":"https:\/\/noemamag.imgix.net\/2026\/01\/Final-01.jpg?fm=pjpg&ixlib=php-3.3.1&s=4eb17f4196cee64c0cd8dcdeaca6454b","type":"image\/jpeg"}],"twitter_card":"summary_large_image","twitter_image":"https:\/\/noemamag.imgix.net\/2026\/01\/Noema-Twitter-Card-Vertical-Template-2026-01-20T112013.359.png?fm=png&ixlib=php-3.3.1&s=7b7db6a4e501dc4bc55f889720afbdef","twitter_site":"@NoemaMag","twitter_misc":{"Est. 
reading time":"24 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/www.noemamag.com\/when-ai-human-worlds-collide\/","url":"https:\/\/www.noemamag.com\/when-ai-human-worlds-collide\/","name":"When AI & Human Worlds Collide","isPartOf":{"@id":"https:\/\/www.noemamag.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.noemamag.com\/when-ai-human-worlds-collide\/#primaryimage"},"image":{"@id":"https:\/\/www.noemamag.com\/when-ai-human-worlds-collide\/#primaryimage"},"thumbnailUrl":"https:\/\/noemamag.imgix.net\/2026\/01\/Final-01.jpg?fm=pjpg&ixlib=php-3.3.1&s=4eb17f4196cee64c0cd8dcdeaca6454b","datePublished":"2026-01-20T17:32:59+00:00","dateModified":"2026-01-21T00:26:49+00:00","description":"Can we imagine a future where synthetic AI worlds shape ours?","breadcrumb":{"@id":"https:\/\/www.noemamag.com\/when-ai-human-worlds-collide\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.noemamag.com\/when-ai-human-worlds-collide\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.noemamag.com\/when-ai-human-worlds-collide\/#primaryimage","url":"https:\/\/noemamag.imgix.net\/2026\/01\/Final-01.jpg?fm=pjpg&ixlib=php-3.3.1&s=4eb17f4196cee64c0cd8dcdeaca6454b","contentUrl":"https:\/\/noemamag.imgix.net\/2026\/01\/Final-01.jpg?fm=pjpg&ixlib=php-3.3.1&s=4eb17f4196cee64c0cd8dcdeaca6454b","width":947,"height":1186,"caption":"Setu Choudhary for Noema Magazine"},{"@type":"BreadcrumbList","@id":"https:\/\/www.noemamag.com\/when-ai-human-worlds-collide\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.noemamag.com\/"},{"@type":"ListItem","position":2,"name":"When AI &amp; Human Worlds Collide"}]},{"@type":"WebSite","@id":"https:\/\/www.noemamag.com\/#website","url":"https:\/\/www.noemamag.com\/","name":"NOEMA","description":"Noema 
Magazine","publisher":{"@id":"https:\/\/www.noemamag.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.noemamag.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.noemamag.com\/#organization","name":"NOEMA","url":"https:\/\/www.noemamag.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.noemamag.com\/#\/schema\/logo\/image\/","url":"https:\/\/noemamag.imgix.net\/2023\/11\/noema-logo.png?fm=png&ixlib=php-3.3.1&s=5f5be9b261a7cf7e336f6f6beea6e539","contentUrl":"https:\/\/noemamag.imgix.net\/2023\/11\/noema-logo.png?fm=png&ixlib=php-3.3.1&s=5f5be9b261a7cf7e336f6f6beea6e539","width":305,"height":69,"caption":"NOEMA"},"image":{"@id":"https:\/\/www.noemamag.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/NoemaMag","https:\/\/x.com\/NoemaMag"]}]}},"parsely":{"version":"1.1.0","canonical_url":"https:\/\/noemamag.com\/when-ai-human-worlds-collide","smart_links":{"inbound":0,"outbound":0},"traffic_boost_suggestions_count":0,"meta":{"@context":"https:\/\/schema.org","@type":"NewsArticle","headline":"When AI &amp; Human Worlds Collide","url":"http:\/\/www.noemamag.com\/when-ai-human-worlds-collide","mainEntityOfPage":{"@type":"WebPage","@id":"http:\/\/www.noemamag.com\/when-ai-human-worlds-collide"},"thumbnailUrl":"https:\/\/noemamag.imgix.net\/2026\/01\/Final-01.jpg?fit=crop&fm=pjpg&h=150&ixlib=php-3.3.1&w=150&wpsize=thumbnail&s=c8258c26c9f5192abccdb1d0c5b331ef","image":{"@type":"ImageObject","url":"https:\/\/noemamag.imgix.net\/2026\/01\/Final-01.jpg?fm=pjpg&ixlib=php-3.3.1&s=4eb17f4196cee64c0cd8dcdeaca6454b"},"articleSection":"Uncategorized","author":[{"@type":"Person","name":"Ben Bariach"}],"creator":["Ben 
Bariach"],"publisher":{"@type":"Organization","name":"NOEMA","logo":"https:\/\/www.noemamag.com\/wp-content\/uploads\/2020\/06\/cropped-ms-icon-310x310-1.png"},"keywords":[],"dateCreated":"2026-01-20T17:32:59Z","datePublished":"2026-01-20T17:32:59Z","dateModified":"2026-01-21T00:26:49Z"},"rendered":"<script type=\"application\/ld+json\" class=\"wp-parsely-metadata\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@type\":\"NewsArticle\",\"headline\":\"When AI &amp; Human Worlds Collide\",\"url\":\"http:\\\/\\\/www.noemamag.com\\\/when-ai-human-worlds-collide\",\"mainEntityOfPage\":{\"@type\":\"WebPage\",\"@id\":\"http:\\\/\\\/www.noemamag.com\\\/when-ai-human-worlds-collide\"},\"thumbnailUrl\":\"https:\\\/\\\/noemamag.imgix.net\\\/2026\\\/01\\\/Final-01.jpg?fit=crop&fm=pjpg&h=150&ixlib=php-3.3.1&w=150&wpsize=thumbnail&s=c8258c26c9f5192abccdb1d0c5b331ef\",\"image\":{\"@type\":\"ImageObject\",\"url\":\"https:\\\/\\\/noemamag.imgix.net\\\/2026\\\/01\\\/Final-01.jpg?fm=pjpg&ixlib=php-3.3.1&s=4eb17f4196cee64c0cd8dcdeaca6454b\"},\"articleSection\":\"Uncategorized\",\"author\":[{\"@type\":\"Person\",\"name\":\"Ben Bariach\"}],\"creator\":[\"Ben 
Bariach\"],\"publisher\":{\"@type\":\"Organization\",\"name\":\"NOEMA\",\"logo\":\"https:\\\/\\\/www.noemamag.com\\\/wp-content\\\/uploads\\\/2020\\\/06\\\/cropped-ms-icon-310x310-1.png\"},\"keywords\":[],\"dateCreated\":\"2026-01-20T17:32:59Z\",\"datePublished\":\"2026-01-20T17:32:59Z\",\"dateModified\":\"2026-01-21T00:26:49Z\"}<\/script>","tracker_url":"https:\/\/cdn.parsely.com\/keys\/noemamag.com\/p.js"},"_links":{"self":[{"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/wpm-article\/86811","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/wpm-article"}],"about":[{"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/types\/wpm-article"}],"author":[{"embeddable":true,"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/users\/7189"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/media\/86812"}],"wp:attachment":[{"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/media?parent=86811"}],"wp:term":[{"taxonomy":"wpm-article-type","embeddable":true,"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/wpm-article-type?post=86811"},{"taxonomy":"wpm-article-topic","embeddable":true,"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/wpm-article-topic?post=86811"},{"taxonomy":"wpm-article-tag","embeddable":true,"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/wpm-article-tag?post=86811"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}