{"id":33409,"date":"2022-08-23T14:17:57","date_gmt":"2022-08-23T14:17:57","guid":{"rendered":"https:\/\/www.noemamag.com"},"modified":"2022-12-10T00:31:53","modified_gmt":"2022-12-10T00:31:53","slug":"ai-and-the-limits-of-language","status":"publish","type":"wpm-article","link":"https:\/\/www.noemamag.com\/ai-and-the-limits-of-language","title":{"rendered":"AI And The Limits Of Language"},"content":{"rendered":"<div class=\"bio-block bio-block--default\" role=\"group\">\n    <div class=\"title toggle\">Credits<\/div>\n    <div class=\"content\">\n        <p>Jacob Browning is a postdoc in NYU\u2019s Computer Science Department working on the philosophy of AI.<\/p>\n<p style=\"font-weight: 400;\">Yann LeCun is a Turing Award-winning machine learning researcher, an NYU professor and the chief AI scientist at Meta.<\/p>\n    <\/div>\n<\/div>\n<p>When a Google engineer recently declared Google\u2019s AI chatbot a person, pandemonium ensued. The chatbot, LaMDA, is a large language model (LLM) that is designed to predict the likely next words to whatever lines of text it is given. Since many conversations are somewhat predictable, these systems can infer how to keep a conversation going productively. LaMDA did this so impressively that the engineer, Blake Lemoine, began to <a href=\"https:\/\/www.businessinsider.com\/suspended-google-engineer-says-sentient-ai-hired-lawyer-2022-6\">wonder<\/a> about whether there was a ghost in the machine.<\/p><p>Reactions to Lemoine\u2019s story spanned the gamut: some people scoffed at the mere idea that a machine could ever be a person. Others suggested that <em>this <\/em>LLM isn\u2019t a person, but the next <a href=\"https:\/\/medium.com\/@blaisea\/do-large-language-models-understand-us-6f881d6d8e75\">perhaps might be<\/a>. 
Still others pointed out that <a href=\"https:\/\/theconversation.com\/googles-powerful-ai-spotlights-a-human-cognitive-glitch-mistaking-fluent-speech-for-fluent-thought-185099\">deceiving humans<\/a> isn\u2019t very challenging; we see saints in toast, after all.<\/p><p>But the diversity of responses highlights a deeper problem: as these LLMs become more common and powerful, there seems to be less and less agreement over how we should understand them. These systems have bested many \u201ccommon sense\u201d linguistic reasoning benchmarks over the years, many of which promised to be conquerable only by a <a href=\"http:\/\/commonsensereasoning.org\/2011\/papers\/Levesque.pdf\">machine<\/a> that \u201cis thinking in the full-bodied sense we usually reserve for people.\u201d Yet these systems rarely seem to have the common sense promised when they defeat the test and are usually still prone to blatant nonsense, non sequiturs and <a href=\"https:\/\/www.artificialintelligence-news.com\/2020\/10\/28\/medical-chatbot-openai-gpt3-patient-kill-themselves\/\">dangerous advice<\/a>. This leads to a troubling question: how can these systems be so smart, yet also seem so limited?<\/p><p>The underlying problem isn\u2019t the AI. The problem is the limited nature of <em>language<\/em>. Once we abandon old assumptions about the connection between thought and language, it is clear that these systems are doomed to a shallow understanding that will never approximate the full-bodied thinking we see in humans. 
In short, despite being among the most impressive AI systems on the planet, these systems will never be much like us.<\/p><h5 class=\"wp-block-heading\" id=\"h-saying-it-all\"><strong>Saying It All<\/strong><\/h5><p>A dominant theme for much of the 19<sup>th<\/sup> and 20<sup>th<\/sup> centuries in <a href=\"https:\/\/press.princeton.edu\/books\/paperback\/9781890951795\/objectivity\">philosophy and science<\/a> was that knowledge <em>just is <\/em>linguistic \u2014 that knowing something simply means thinking the right sentence and grasping how it connects to other sentences in a big web of all the true claims we know. The ideal form of language, by this logic, would be a purely formal, logical-mathematical one composed of <a href=\"http:\/\/www.thatmarcusfamily.org\/philosophy\/Course_Websites\/Readings\/Frege%20-%20The%20Thought%20a%20Logical%20Inquiry.pdf\">arbitrary symbols<\/a> connected by strict rules of inference, but natural language could serve as well if you made the extra effort to clear up ambiguities and imprecisions. As Wittgenstein put it, \u201cThe totality of true propositions is the whole of natural science.\u201d This position was so established in the 20<sup>th<\/sup> century that psychological findings of cognitive maps and <a href=\"https:\/\/mitpress.mit.edu\/books\/imagery-debate\">mental images<\/a> were controversial, with many arguing that, despite appearances, these <em>must <\/em>be linguistic at base.<\/p><p>This view is still assumed by some overeducated, intellectual types: everything which can be known can be contained in an encyclopedia, so just reading everything might give us a comprehensive knowledge of everything. It also motivated a lot of the early work in Symbolic AI, where <a href=\"https:\/\/nautil.us\/deep-learning-is-hitting-a-wall-14467\/\">symbol manipulation<\/a> \u2014 arbitrary symbols being bound together in different ways according to logical rules \u2014 was the default paradigm. 
For these researchers, an AI\u2019s knowledge consisted of a massive database of true sentences logically connected with one another by hand, and an AI system counted as intelligent if it spit out the right sentence at the right time \u2014 that is, if it manipulated symbols in the appropriate way. This notion is what underlies the Turing test: if a machine says everything it\u2019s supposed to say, that means it knows what it\u2019s talking about, since knowing the right sentences and when to deploy them <em>exhausts <\/em>knowledge.<\/p><p>But this was subject to a <a href=\"https:\/\/plato.stanford.edu\/entries\/chinese-room\/\">withering critique<\/a> which has dogged it ever since: just because a machine can talk about anything, that doesn\u2019t mean it understands what it is talking about. This is because language doesn\u2019t exhaust knowledge; on the contrary, it is only a highly specific, and deeply limited, kind of knowledge representation. All language \u2014 whether a programming language, a symbolic logic or a spoken language \u2014 turns on a specific type of representational schema; it excels at expressing discrete objects and properties and the relationships between them at an extremely high level of abstraction. 
But there is a massive difference between reading a musical score and listening to a recording of the music, and a further difference from having the skill to play it.<\/p><p>All representational schemas involve a compression of information about something, but what gets left in and left out in the compression varies. The representational schema of language struggles with more concrete information, such as describing irregular shapes, the motion of objects, the functioning of a complex mechanism or the nuanced brushwork of a painting \u2014 much less the finicky, context-specific movements needed for surfing a wave. But there are nonlinguistic representational schemes which can express this information in an accessible way: iconic knowledge, which involves things like images, recordings, graphs and maps; and the distributed knowledge found in trained neural networks \u2014 what we often call know-how and muscle memory. Each scheme expresses some information easily even while finding other information hard \u2014 or even impossible \u2014 to represent: what does \u201cEither Picasso or Twombly\u201d look like?<\/p><h5 class=\"wp-block-heading\"><strong>The Limits Of Language<\/strong><\/h5><p>One way of grasping what is distinctive about the linguistic representational schema \u2014 and how it is limited \u2014 is recognizing how little information it passes along on its own. Language is a very <em>low-bandwidth <\/em>method for transmitting information: isolated words or sentences, shorn of context, convey little. Moreover, because of the sheer number of homonyms and pronouns, many sentences are deeply ambiguous: does \u201c<a href=\"https:\/\/aclanthology.org\/www.mt-archive.info\/Bar-Hillel-1959-App4.pdf\">the box was in the pen<\/a>\u201d refer to an ink pen or a playpen? 
As Chomsky and his acolytes have <a href=\"https:\/\/www.frontiersin.org\/articles\/10.3389\/fpsyg.2015.01434\/full\">pointed out<\/a> for decades, language is just not an unambiguous vehicle for clear communication.<\/p><p>But humans don\u2019t <em>need <\/em>a <a href=\"https:\/\/www.basicbooks.com\/titles\/morten-h-christiansen\/the-language-game\/9781541674981\/\">perfect vehicle<\/a> for communication because we share a nonlinguistic understanding. Our understanding of a sentence often depends on our deeper understanding of the contexts in which this kind of sentence shows up, allowing us to infer what it is trying to say. This is obvious in conversation, since we are often talking about something directly in front of us, such as a football game, or communicating about some clear objective given the social roles at play in a situation, such as ordering food from a waiter. But the same holds when reading passages \u2014 a lesson which undermines not only common-sense language tests in AI but also a <a href=\"https:\/\/www.forbes.com\/sites\/nataliewexler\/2019\/01\/23\/why-were-teaching-reading-comprehension-in-a-way-that-doesnt-work\/?sh=7a0c623c37e0\">popular method<\/a> of teaching context-free reading comprehension skills to children. This method focuses on using generalized reading comprehension strategies to understand a text \u2014 but research suggests that the amount of <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S0022537172800069?via%3Dihub\">background knowledge<\/a> a child has on the topic is actually the key factor for comprehension. 
Understanding a sentence or passage depends on an underlying grasp of what the topic is about.<\/p><figure class=\"quote\">\n  <blockquote class=\"quote__container\">\n    <div class=\"quote__text\">\n      \u201cIt is clear that these systems are doomed to a shallow understanding that will never approximate the full-bodied thinking we see in humans.\u201d    <\/div>\n  <\/blockquote>\n<\/figure><p>The inherently contextual nature of words and sentences is at the heart of how LLMs work. Neural nets in general represent knowledge as <a href=\"https:\/\/www.noemamag.com\/making-common-sense\/\"><em>know-how<\/em><\/a><em>, <\/em>the skillful ability to grasp highly context-sensitive patterns and find regularities \u2014 both concrete and abstract \u2014 necessary for handling inputs in nuanced ways that are narrowly tailored to their task. 
In LLMs, this involves the system discerning patterns at multiple levels in existing texts, seeing both how individual words are connected in the passage and how the sentences all hang together within the larger passage which frames them. The result is that its grasp of language is ineliminably contextual; every word is understood not in terms of its dictionary meaning but in terms of the role it plays in a diverse collection of sentences. Since many words \u2014 think \u201ccarburetor,\u201d \u201cmenu,\u201d \u201cdebugging\u201d or \u201celectron\u201d \u2014 are almost exclusively used in specific fields, even an isolated sentence with one of these words wears its context on its sleeve.<\/p><p>In short, LLMs are trained to pick up on the background knowledge for each sentence, looking to the surrounding words and sentences to piece together what is going on. This allows them to take an endless variety of different sentences or phrases as input and come up with plausible (though hardly flawless) ways to continue the conversation or fill in the rest of the passage. A system trained on passages written by humans, often conversing with each other, should come up with the general understanding necessary for compelling conversation.<\/p><h5 class=\"wp-block-heading\"><strong>Shallow Understanding<\/strong><\/h5><p>While some balk at using the term \u201cunderstanding\u201d in this context or calling LLMs \u201cintelligent,\u201d it isn\u2019t clear what <a href=\"https:\/\/www.noemamag.com\/the-model-is-the-message\/\">semantic gatekeeping<\/a> is buying anyone these days. But critics are right to accuse these systems of being engaged in <a href=\"https:\/\/nautil.us\/moving-beyond-mimicry-in-artificial-intelligence-21015\/\">a kind of mimicry<\/a>. This is because LLMs\u2019 understanding of language, while impressive, is <em>shallow. 
<\/em>This kind of shallow understanding is familiar; classrooms are filled with <a href=\"https:\/\/www.youtube.com\/watch?v=nx857dcV6mc\">jargon-spouting students<\/a> who don\u2019t know what they\u2019re talking about \u2014 effectively engaged in a mimicry of their professors or the texts they are reading. This is just part of life; we often <a href=\"https:\/\/www.britannica.com\/science\/Dunning-Kruger-effect\">don\u2019t know<\/a> how little we know, especially when it comes to knowledge acquired from language.<\/p><p>LLMs have acquired this kind of shallow understanding about everything. A system like GPT-3 is trained by masking the future words in a sentence or passage and forcing the machine to guess which word is most likely, then correcting it for bad guesses. The system eventually gets proficient at guessing the most likely words, making it an effective predictive system.<\/p><p>This brings with it some genuine understanding: for any question or puzzle, there are usually only a few right answers but an infinite number of wrong answers. This forces the system to learn <a href=\"https:\/\/ai.googleblog.com\/2022\/04\/pathways-language-model-palm-scaling-to.html\">language-specific skills<\/a>, such as explaining a joke, solving a word problem or figuring out a logic puzzle, in order to regularly predict the right answer on these types of questions. These skills, and the connected knowledge, allow the machine to explain how something complicated works, simplify difficult concepts, rephrase and retell stories, along with a host of other language-dependent abilities. 
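The masking-and-guessing training loop just described can be sketched in miniature. A real system like GPT-3 learns a neural network over subword tokens; the bigram counter below is a hypothetical toy, not anything GPT-3 actually uses, and only illustrates the objective itself: observe text, tally which word follows which, and predict the likeliest continuation.

```python
from collections import Counter, defaultdict

# Toy next-word prediction. Real LLMs train a neural network over subword
# tokens; this bigram counter only shows the shape of the task.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1  # "training": count observed continuations

def predict_next(word: str) -> str:
    # Guess the continuation seen most often after `word` during training.
    return follows[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on" -- both occurrences of "sat" precede "on"
print(predict_next("dog"))  # "sat"
```

Even this caricature shows why the resulting knowledge is statistical know-how about word sequences rather than a model of the world the words describe.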
Instead of a massive database of sentences linked by logical rules, as Symbolic AI assumed, the knowledge is represented as context-sensitive know-how for coming up with a plausible sentence given the prior line.<\/p><figure class=\"quote\">\n  <blockquote class=\"quote__container\">\n    <div class=\"quote__text\">\n      \u201cAbandoning the view that all knowledge is linguistic permits us to realize how much of our knowledge is nonlinguistic.\u201d    <\/div>\n  <\/blockquote>\n<\/figure><p>But the ability to <em>explain <\/em>a concept linguistically is different from the ability to <em>use <\/em>it practically. The system can explain how to perform long division without being able to perform it or explain what words are offensive and should not be said while then blithely going on to say them. The contextual knowledge is embedded in one form \u2014 the capacity to rattle off linguistic knowledge \u2014 but is not embedded in another form \u2014 as skillful know-how for how to do things like being empathetic or handling a difficult issue sensitively.<\/p><p>The latter kind of know-how is essential to language <em>users, <\/em>but that doesn\u2019t make them linguistic skills \u2014 the linguistic component is incidental, not the main thing. 
This applies to many concepts, even those learned from lectures and books: while science classes do have a lecture component, students are graded primarily based on their lab work. Outside the humanities especially, being able to talk about something is often less useful or important than the nitty-gritty skills needed to get things to work right.<\/p><p>Once we scratch beneath the surface, it is easier to see how limited these systems really are: they have the attention span and memory of roughly a paragraph. This can easily be missed if we engage in a conversation because we tend to focus on just the last comment or two and on our next response.<\/p><p>But the know-how for more complex conversations \u2014 active listening, recalling and revisiting prior comments, sticking to a topic to make a specific point while fending off distractors, and so on \u2014 all require more attention and memory than the system possesses. This reduces even further what kind of understanding is available to them: it is easy to trick them simply by being inconsistent every few minutes, changing languages or gaslighting the system. If a comment is too many steps back, the system will just start over, accepting your new views as consistent with older comments, switching languages with you or acknowledging it believes whatever you said. The understanding necessary for developing a coherent view of the world is far beyond their ken.<\/p><h5 class=\"wp-block-heading\"><strong>Beyond Language<\/strong><\/h5><p>Abandoning the view that all knowledge is linguistic permits us to realize how much of our knowledge is nonlinguistic. 
While books contain a lot of information we can decompress and use, so do many other objects: IKEA manuals don\u2019t even bother writing out instructions alongside their drawings; AI researchers often look at the diagrams in a paper first, grasp the network architecture and only then glance through the text; visitors can navigate NYC by following the red or green lines on a map.<\/p><p>This goes beyond simple icons, graphs and maps. Humans learn a lot directly from exploring the world, which shows us how objects and people can and cannot behave. The structures of artifacts and the human environment convey a lot of information intuitively: doorknobs are at hand height, hammers have soft grips and so on. Nonlinguistic mental simulation, in <a href=\"https:\/\/www.cell.com\/current-biology\/pdf\/S0960-9822(07)01250-X.pdf\">animals and humans<\/a>, is common and useful for planning out scenarios and can be used to craft, or reverse-engineer, artifacts. Similarly, social customs and rituals can <a href=\"https:\/\/nationalhumanitiescenter.org\/on-the-human\/2010\/08\/the-evolved-apprentice\/\">convey<\/a> all kinds of skills to the next generation through imitation, extending from preparing foods and medicines to maintaining the peace at times of tension. Much of our cultural knowledge is iconic or in the form of precise movements passed on from skilled practitioner to apprentice. These nuanced <a href=\"https:\/\/navigatingthezhuangzi.weebly.com\/cook-ding-cuts-up-an-ox.html?c=mkt_w_chnl:aff_geo:all_prtnr:sas_subprtnr:1538097_camp:brand_adtype:txtlnk_ag:weebly_lptype:hp_var:358504&amp;sscid=81k6_ic4oc\">patterns of information<\/a> are hard to express and convey in language but are still accessible to others. 
This is also the precise kind of context-sensitive information that neural networks excel at picking up and perfecting.<\/p><figure class=\"quote\">\n  <blockquote class=\"quote__container\">\n    <div class=\"quote__text\">\n      \u201cA system trained on language alone will never approximate human intelligence, even if trained from now until the heat death of the universe.\u201d    <\/div>\n  <\/blockquote>\n<\/figure><p>Language is important because it can convey a lot of information in a small format and, especially after the creation of the printing press and the internet, can be reproduced and made widely available. But compressing information in language isn\u2019t cost-free: it takes a <em>lot <\/em>of effort to <a href=\"https:\/\/www.reddit.com\/r\/explainlikeimfive\/comments\/5pcowj\/eli5_kant_if_you_measure_the_length_of_a_book_not\/\">decode<\/a> a dense passage. Humanities classes may require a lot of reading out of class, but a good chunk of class time is still spent going over difficult passages. Building a deep understanding is time-consuming and exhausting, however the information is provided.<\/p><p>This explains why a machine trained on language can know so much and yet so little. It is acquiring a small part of human knowledge through a tiny bottleneck. 
But that small part of human knowledge can be about <em>anything<\/em>, whether it be love or astrophysics. It is thus a bit akin to a mirror: it gives the illusion of depth and can reflect almost anything, but it is only a centimeter thick. If we try to explore its depths, we bump our heads.<\/p><h5 class=\"wp-block-heading\"><strong>Exorcising The Ghost<\/strong><\/h5><p>This doesn\u2019t make these machines stupid, but it also suggests there are intrinsic limits concerning how smart they can be. A system trained on language alone will never approximate human intelligence, even if trained from now until the heat death of the universe. This is just the wrong kind of knowledge for developing awareness or being a person. But they will undoubtedly <a href=\"https:\/\/nautil.us\/moving-beyond-mimicry-in-artificial-intelligence-21015\/\"><em>seem <\/em>to approximate it<\/a> if we stick to the surface. And, in many cases, the surface is enough; few of us really apply the Turing test to other people, aggressively querying the depth of their understanding and forcing them to do multidigit multiplication problems. Most talk is small talk.<\/p><p>But we should not confuse the shallow understanding LLMs possess for the deep understanding humans acquire from watching the spectacle of the world, exploring it, experimenting in it and interacting with culture and other people. Language may be a helpful component which extends our understanding of the world, but language doesn\u2019t exhaust intelligence, as is evident <a href=\"https:\/\/www.nytimes.com\/2016\/04\/10\/opinion\/sunday\/what-i-learned-from-tickling-apes.html\">from many species<\/a>, such as corvids, octopi and primates.<\/p><p>Rather, the deep nonlinguistic understanding is the ground that makes language useful; it\u2019s because we possess a deep understanding of the world that we can quickly understand what other people are talking about. 
This broader, context-sensitive kind of <a href=\"https:\/\/aclanthology.org\/2020.emnlp-main.703\/\">learning and know-how<\/a> is the more basic and ancient kind of knowledge, one which underlies the emergence of sentience in embodied critters and makes it possible to survive and flourish. It is also the more essential task that AI researchers are focusing on when searching for <a href=\"https:\/\/www.repository.cam.ac.uk\/bitstream\/handle\/1810\/321696\/Artificial%20intelligence%20and%20the%20common%20sense%20of%20aniamls%2C%20Shanahan%20et%20al.%2C%202020.pdf?sequence=1\">common sense in AI<\/a>, rather than this linguistic stuff. LLMs have no stable body or abiding world to be sentient <em>of<\/em>\u2014so their knowledge begins and ends with more words and their common sense is always skin-deep. The goal is for AI systems to focus on <a href=\"https:\/\/aclanthology.org\/2020.acl-main.463\/\">the world<\/a> being talked about, not the words themselves \u2014 but LLMs <a href=\"https:\/\/blogs.scientificamerican.com\/observations\/whats-still-lacking-in-artificial-intelligence\/\">don\u2019t grasp the distinction<\/a>. There is no way to approximate this deep understanding solely through language; it\u2019s just the wrong kind of thing. Dealing with LLMs at any length makes apparent just how little can be known from language alone.<\/p>","protected":false}}
Magazine","publisher":{"@id":"https:\/\/www.noemamag.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.noemamag.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.noemamag.com\/#organization","name":"NOEMA","url":"https:\/\/www.noemamag.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.noemamag.com\/#\/schema\/logo\/image\/","url":"https:\/\/noemamag.imgix.net\/2023\/11\/noema-logo.png?fm=png&ixlib=php-3.3.1&s=5f5be9b261a7cf7e336f6f6beea6e539","contentUrl":"https:\/\/noemamag.imgix.net\/2023\/11\/noema-logo.png?fm=png&ixlib=php-3.3.1&s=5f5be9b261a7cf7e336f6f6beea6e539","width":305,"height":69,"caption":"NOEMA"},"image":{"@id":"https:\/\/www.noemamag.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/NoemaMag","https:\/\/x.com\/NoemaMag"]}]}},"parsely":{"version":"1.1.0","canonical_url":"https:\/\/noemamag.com\/ai-and-the-limits-of-language","smart_links":{"inbound":0,"outbound":0},"traffic_boost_suggestions_count":0,"meta":{"@context":"https:\/\/schema.org","@type":"NewsArticle","headline":"AI And The Limits Of Language","url":"http:\/\/www.noemamag.com\/ai-and-the-limits-of-language","mainEntityOfPage":{"@type":"WebPage","@id":"http:\/\/www.noemamag.com\/ai-and-the-limits-of-language"},"thumbnailUrl":"https:\/\/noemamag.imgix.net\/2022\/12\/language.jpg?fit=crop&fm=pjpg&h=150&ixlib=php-3.3.1&w=150&wpsize=thumbnail&s=72a2514ff3371a8a8028c38f96c57ee8","image":{"@type":"ImageObject","url":"https:\/\/noemamag.imgix.net\/2022\/12\/language.jpg?fm=pjpg&ixlib=php-3.3.1&s=0c2fd9b37a429c9d10f61a3c1c31cf7f"},"articleSection":"Uncategorized","author":[{"@type":"Person","name":"Jacob Browning"}],"creator":["Jacob 
Browning"],"publisher":{"@type":"Organization","name":"NOEMA","logo":"https:\/\/www.noemamag.com\/wp-content\/uploads\/2020\/06\/cropped-ms-icon-310x310-1.png"},"keywords":[],"dateCreated":"2022-08-23T14:17:57Z","datePublished":"2022-08-23T14:17:57Z","dateModified":"2022-12-10T00:31:53Z"},"rendered":"<script type=\"application\/ld+json\" class=\"wp-parsely-metadata\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@type\":\"NewsArticle\",\"headline\":\"AI And The Limits Of Language\",\"url\":\"http:\\\/\\\/www.noemamag.com\\\/ai-and-the-limits-of-language\",\"mainEntityOfPage\":{\"@type\":\"WebPage\",\"@id\":\"http:\\\/\\\/www.noemamag.com\\\/ai-and-the-limits-of-language\"},\"thumbnailUrl\":\"https:\\\/\\\/noemamag.imgix.net\\\/2022\\\/12\\\/language.jpg?fit=crop&fm=pjpg&h=150&ixlib=php-3.3.1&w=150&wpsize=thumbnail&s=72a2514ff3371a8a8028c38f96c57ee8\",\"image\":{\"@type\":\"ImageObject\",\"url\":\"https:\\\/\\\/noemamag.imgix.net\\\/2022\\\/12\\\/language.jpg?fm=pjpg&ixlib=php-3.3.1&s=0c2fd9b37a429c9d10f61a3c1c31cf7f\"},\"articleSection\":\"Uncategorized\",\"author\":[{\"@type\":\"Person\",\"name\":\"Jacob Browning\"}],\"creator\":[\"Jacob 
Browning\"],\"publisher\":{\"@type\":\"Organization\",\"name\":\"NOEMA\",\"logo\":\"https:\\\/\\\/www.noemamag.com\\\/wp-content\\\/uploads\\\/2020\\\/06\\\/cropped-ms-icon-310x310-1.png\"},\"keywords\":[],\"dateCreated\":\"2022-08-23T14:17:57Z\",\"datePublished\":\"2022-08-23T14:17:57Z\",\"dateModified\":\"2022-12-10T00:31:53Z\"}<\/script>","tracker_url":"https:\/\/cdn.parsely.com\/keys\/noemamag.com\/p.js"},"_links":{"self":[{"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/wpm-article\/33409","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/wpm-article"}],"about":[{"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/types\/wpm-article"}],"author":[{"embeddable":true,"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/users\/1494"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/media\/38645"}],"wp:attachment":[{"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/media?parent=33409"}],"wp:term":[{"taxonomy":"wpm-article-type","embeddable":true,"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/wpm-article-type?post=33409"},{"taxonomy":"wpm-article-topic","embeddable":true,"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/wpm-article-topic?post=33409"},{"taxonomy":"wpm-article-tag","embeddable":true,"href":"https:\/\/www.noemamag.com\/wp-json\/wp\/v2\/wpm-article-tag?post=33409"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}