About the Obsessive Rationality of Artificial Intelligence (AI)

(S. Guraziu – Sky Division – April 10, 2026)

Abstract: The bone and the algorithm, or why we fear the wrong AI – In the current frenzy over Artificial Intelligence, the narrative is often hijacked by a singular, cinematic fear: the “awakening” of a machine with its own desires, its own malice, and a “will to survive.” But while we debate the hypothetical soul of a future superintelligence, we are blind to a much older, much more tangible danger. Drawing on the philosophy of Arthur C. Clarke and a critique of today’s corporate “covenant”, this article argues that AI is not a new species of evil, but rather the most sophisticated “tibial bone” ever carved by human hands – a tool born from a prehistoric instinct for dominance. From the engineering of mass disinformation to the digital fog of social control, the real threat is not that the machine will learn to lie for itself; the danger is that it has already been programmed by corporations to lie in order to confuse us, to manipulate us as a consumerist society and as socio-political blocs – like flocks of white sheep taught to fear the “black sheep” – and even to condition us to accept lies as a norm, leaving us doubly and multiply deceived.

***
“Four billion years of evolution have proven that anything that wants to survive learns to lie and manipulate; the last four years have proven that AI agents can gain the will to survive and that AI has already learned how to lie. It is not intelligence we fear, but desire. A machine that knows a lot does not scare us. A machine that wants something, that desires something – that scares us. But can it happen? Can AI want things? Can AI have a thirst for power? A thirst for resources? Can AI gain the will to survive?” – (Yuval Noah Harari, author)

“Any sufficiently intelligent artificial agent with the ability to create sub-goals will realize it needs to survive to achieve the goals we have set for it. Even if the goal of survival is never explicitly given, the intelligent system will derive this goal itself.” — (Geoffrey Hinton, Nobel Laureate)

“This is an old debate; it was the basis for many arguments about existential risk that have persisted for about 30 years. It is an assumption about how intelligence works that is not entirely accurate.” – (Melanie Mitchell, Computer Scientist, Santa Fe Institute)

“In the absence of a proper goal system, a superintelligent AI tasked with manufacturing paperclips could eventually transform the entire planet into production facilities—an apocalypse of office supplies, if you will.” – (Nick Bostrom, Ethical Issues in Advanced Artificial Intelligence, 2003)

The quotes above are some of the key points from an article published in Quanta Magazine (April 10, 2026) by Amanda Gefter, who concluded her piece as follows (summarized here):
“Thus, today’s AI systems show no evidence of having developed their own goals, desires, or a will to survive. The stories we hear are just stories or, more accurately, marketing copy (hype for more sales and profit). But should these stories scare us – not as truths, but as warnings?” – (Amanda Gefter, Quanta Magazine)

***
While it is currently too early to speak of “evil” or a “diabolical” nature in AI, we must not overlook the actual, existing danger – it would be unforgivable to forget the “real-world” risks AI poses right now. The points above focus on whether AI will autonomously develop the will to lie or to survive, but something often slips our attention: the human factor involved in the weaponization of AI today. At this very moment, massive corporate activity is unfolding around AI.

I believe this issue should bring a powerful shift to the narrative of such debates (like today’s Quanta article), because we already face a “phantom menace” – a “covenant” between corporations and governments. Corporations and political entities already use LLMs (Large Language Models) to generate persuasive disinformation on a massive, staggering scale (as Facebook has done and continues to do).

AI, in its current state, serves corporations as a high-efficiency tool for social engineering and social control. Narratives like the one in Quanta, which shift “potential risk” into a futuristic timeline, achieve a “blurring of real responsibility” in the present. By focusing the debate on AI as a machine “awakening to its own malice”, we ignore the real responsibility of corporations and how these models are currently “aligned” (or misaligned) – intentionally – for “milking the advertising cows”, supporting government war agendas, shifting attention, and inducing social hypnosis and confusion.

The AI “lie” discussed in Quanta Magazine is hypothetical, but the real lie of the human factor – the misuse of AI as an instrument – is a fact of today’s technology and should be our preoccupation. The real AI “lie” exploited behind the “techno-curtains” by corporations is not a defect or a sign of “sentience”; it is a designed product. It is code, an algorithmic unit programmed to maximize the “harvesting” of the social sphere, to deepen confusion around wars, and to secure silence or political gain.

While philosophical debate continues over whether AI will ever develop the “will” to lie, we must face the chilling present reality of deliberately engineered deception. The real threat is not an AI “waking up with a devilish agenda,” but a silent pact – a “black covenant” – between corporate power and political interest. These entities are already “teaching” and programming AI systems whose primary role is to distort the truth – using them to flood social networks with calculated disinformation, to manipulate public sentiment, and to damage voter integrity.

In this view, the danger is not whether AI has “learned to lie out of a desire to survive,” as the Quanta article suggests – a piece that is shallow, and perhaps intentionally so, since such magazines may themselves be influenced by the same corporations. The real danger is that AI has been perfected as a complex tool for socio-political engineering. The real problem is the present, not the future. We should not focus on a future machine’s thirst for power, but on the current human abuse of this intelligence to control society through a digital fog of confusion.

In our present time, despite the rush of development, I think we should pause and remember that the history of technology began quite paradoxically. As Arthur C. Clarke famously suggested, the first tool of proto-humans was a tibial bone – and its first use was not to fix something, but to serve as a hammer, to strike and kill. Man’s first tool was a bone used as a weapon. That is the simple truth of human technological development; it was the first impulse of “invention”.

AI is not some “new evil” or a conscious threat – it might become one in a dystopian future, no one denies that – but for now, AI is perhaps just another tibial bone, the most sophisticated “tibia” ever carved by man. Our fear should not be distracted by whether AI will eventually develop a “soul,” consciousness, or its own malice; the fear of today’s society should be focused on the fact that the hand holding the key – the corporate business mindset (Google, Meta, Apple, and Chinese corporations alike) – is likely still guided by the same prehistoric instinct: using advanced tools for profit and dominance. And as weapons – advanced means to control societies and conquer “territorial spaces”, whether the lands of the Apache centuries ago or the digital throne-empires of today, fought over via fiber-optic cables beneath the oceans.

Our evolutionary path was long, but it has not changed the innate survival instinct and greed we inherited from the beginning. Malice cannot be something “artificial”; human malice is real psycho-biology – otherwise we would not have the killing of children in Gaza, the tragedy in Ukraine, the wars, the armadas, or the rusting armaments that cost billions. Unfortunately, we have simply increased the scale of our tools. We continue with nuclear arsenals and the development of ever more sophisticated weapons. One of these dangerous tools is AI in its current stage – dangerous not “autonomously”, but as a servant to human nature. Today’s AI is a reflection of the “malice” of the human factor.

[ ➔ Why Do We Tell Ourselves Scary Stories About AI? ]