AI Is Poised to Take Over Language, Law and Religion, Historian Yuval Noah Harari Warns

Source: Decrypt

Published: 2026-01-20 22:07

BTC Price: $89,538

#AI #Crypto #FutureOfFinance

Analysis

Price Impact

Low

The article discusses the philosophical and societal implications of AI, particularly its long-term potential to influence systems built on language, such as law, religion, and finance. While crypto is part of that financial landscape, Harari's warnings are broad and theoretical; they do not address current market mechanics, adoption rates, or imminent regulatory changes specific to cryptocurrencies. The discussion is high-level and offers no immediate catalyst for price movement.

Trustworthiness

High

Yuval Noah Harari is a widely respected historian and public intellectual known for his work on humanity's future, and his warnings are taken seriously in academic and policy circles. That these remarks were delivered at the World Economic Forum adds to their credibility as a significant long-term concern, even if the immediate market impact is low.

Price Direction

Neutral

The article is a high-level philosophical discussion of AI's future impact on society. It contains no specific news, data, or events that would push short-term crypto prices in either a bullish or bearish direction; the arguments presented are abstract and focus on long-term societal shifts.

Time Effect

Long

Harari's warnings explicitly frame the stakes as decisions that must be made "now" to avoid consequences "ten years from now." The impact he describes is a fundamental shift in how societies and institutions operate, which would unfold over a long period. Any direct implications for the crypto market would likewise be long-term, tied to AI's evolving role in finance, governance, and potentially autonomous crypto agents.

Original Article:

In brief

- Harari said AI should be understood as active autonomous agents rather than a passive tool.
- He warned that systems built primarily on words, including religion, law, and finance, face heightened exposure to AI.
- Harari urged leaders to decide whether to treat AI systems as legal persons before those choices are made for them.

Historian and author Yuval Noah Harari warned at the World Economic Forum on Tuesday that humanity is at risk of losing control over language, which he called its defining "superpower," as artificial intelligence increasingly operates via autonomous agents rather than passive tools.

The author of "Sapiens," Harari has become a frequent voice in global debates about the societal implications of artificial intelligence. He argued that legal codes, financial markets, and organized religion rely almost entirely on language, leaving them especially exposed to machines that can generate and manipulate text at scale.

"Humans took over the world not because we are the strongest physically, but because we discovered how to use words to get thousands and millions and billions of strangers to cooperate," he said. "This was our superpower."

Harari pointed to religions grounded in sacred texts, including Judaism, Christianity, and Islam, arguing that AI's ability to read, retain, and synthesize vast bodies of writing could make machines the most authoritative interpreters of scripture.

"If laws are made of words, then AI will take over the legal system," he said. "If books are just combinations of words, then AI will take over books. If religion is built from words, then AI will take over religion."

In Davos, Harari also compared the spread of AI systems to a new form of immigration, and said the debate around the technology will soon focus on whether governments should grant AI systems legal personhood. Several states, including Utah, Idaho, and North Dakota, have already passed laws explicitly stating that AI cannot be considered a person under the law.

Harari closed his remarks by urging global leaders to act quickly on laws governing AI and not to assume the technology will remain a neutral servant. He compared the current rush to adopt the technology to historical cases in which mercenaries later seized power.

"Ten years from now, it will be too late for you to decide whether AIs should function as persons in the financial markets, in the courts, in the churches," he said. "Somebody else will already have decided it for you. If you want to influence where humanity is going, you need to make a decision now."

Harari's comments may hit hard for those fearful of AI's advancing spread, but not everyone agreed with his framing. Professor Emily M. Bender, a linguist at the University of Washington, said that framing the risks as Harari did only shifts attention away from the human actors and institutions responsible for building and deploying AI systems.

"It sounds to me like it's really a bid to obfuscate the actions of the people and corporations building these systems," Bender told Decrypt in an interview. "And also a demand that everyone should just relinquish our own human rights in many domains, including the right to our languages, to the whims of these companies in the guise of these so-called artificial intelligence systems."

Bender rejected the idea that "artificial intelligence" describes a clear or neutral category of technology. "The term artificial intelligence doesn't refer to a coherent set of technologies," she said. "It is, effectively, and always has been, a marketing term." She added that systems designed to imitate professionals such as doctors, lawyers, or clergy lack legitimate use cases.

"What is the purpose of something that can sound like a doctor, a lawyer, a clergy person, and so on?" Bender said. "The purpose there is fraud. Period."

While Harari pointed to the growing use of AI agents to manage bank accounts and business interactions, Bender said the risk lies in how readily people trust machine-generated outputs that appear authoritative while lacking human accountability.

"If you have a system that you can poke at with a question and have something come back out that looks like an answer, that is stripped of its context and stripped of any accountability for the answer, but positioned as coming from some all-knowing oracle, then you can see how people would want that to exist," Bender said. "I think there's a lot of risk there that people will start orienting toward it and using that output to shape their own ideas, beliefs, and actions."