TRANSMISSION_LOG 2026.03.16 09:25

Artificial Intelligence

Artificial intelligence (AI) is set to lead inevitably to a **division of society**, creating classes of haves and have-nots.

A Palantir Future

This anticipated future is detailed in texts such as _The Age of AI: Our Human Future_, written by Henry Kissinger, former Google CEO Eric Schmidt, and Daniel Huttenlocher. The book presents warnings about these developments, even if some of its influential authors may not appear genuinely concerned by their own observations. The fundamental risk arises from AI being entrusted with critical systems where decisions directly impact human lives.

Societal Stratification and Resource Allocation

In times of scarcity or profound crisis, such as widespread conflict or economic collapse, populations may become reliant on humanitarian or governmental aid. In such scenarios, AI may be designated to manage resource distribution.

This AI could make critical determinations regarding eligibility for resources, potentially based on criteria such as compliance or the absence of "thought crimes," deciding who receives essential provisions like food rations.

A significant concern is the potential for AI to leverage personal data, including social media activity, to make these life-or-death judgments. The technology is highly exportable, with historical precedents in surveillance and weapon technologies that have been tested and then globally distributed by nations with substantial defence industries.

Surveillance, Weaponry, and Accuracy

AI weaponry has undergone considerable testing, especially in Gaza by Israel. AI-powered surveillance tools, exemplified by Palantir, co-founded by Peter Thiel, are highly intrusive, facilitating extensive profiling of individuals and the identification of targets.

Palantir possesses the capability to profile individuals to ascertain if they are deemed subversive. Concerns persist regarding the accuracy of AI systems, particularly in sensitive applications such as facial recognition. Reports have indicated that facial recognition technology employed by law enforcement can be highly inaccurate, sometimes exhibiting accuracy rates below 50 per cent.

Despite these limitations, there is a clear intention to expand the use of these systems for identifying individuals. The problem is exacerbated by potential corruption within governmental contracting processes: without independent audits, an AI oversold as 95 per cent accurate but in reality functioning at 65 per cent or less could be assigned control over systems that profoundly affect human lives. When AI algorithms are tasked with determining who lives and who dies, such inaccuracies become gravely problematic.

The Architects of Control and Data Accumulation

A considerable portion of the funding for AI weaponry and surveillance technology originates from individuals linked to the PayPal Mafia, including prominent figures such as Peter Thiel and Elon Musk, who wield significant influence over alternative media and content platforms.

Despite some professing libertarian ideals, their actions, such as Thiel's role in creating Palantir for the CIA, contradict stated opposition to state overreach. Similarly, Elon Musk's acquisition of Twitter (now X) is viewed not solely as a commitment to free speech but potentially as a strategy to amass vast quantities of data to train AI and other products, with the ultimate aim of transforming X into an everything app, analogous to China's WeChat. Such an application would collect unprecedented volumes of user data, granting extensive control.

The success of companies like SpaceX, also owned by Elon Musk, relies on government subsidies and contracts with military and intelligence agencies, illustrating the deep interconnections between powerful private entities and the state.

The ownership of AI is largely concentrated among Silicon Valley corporations, including Microsoft, Google, Amazon, and Oracle, many of which act as contractors for military and intelligence sectors. These corporations effectively control nearly all aspects of AI, raising questions about who ultimately oversees its deployment and defines its objective functions. The individuals who program and develop AI, and thus determine its objectives, form an elite tier, while a much larger lower class will be subject to AI's actions, often without awareness of what the AI is doing to them. These influential actors are not perceived to have the best interests of the public at heart.

Cognitive Diminishment and the Erosion of Human Agency

A core concept underlying the potential for AI control is cognitive diminishment, a phenomenon where reliance on AI for mental tasks leads to a degradation of human capacity to perform those tasks independently. This parallels the loss of any skill that is not regularly practised. As AI becomes increasingly convenient and integrated into daily existence, it will become progressively more difficult, and eventually impossible, for individuals to perform certain functions, including making decisions for themselves. AI will guide choices concerning travel, musical preferences, and consumer purchases, culminating in a state where people lose the capacity to form their own preferences or make independent decisions without AI assistance.

This outsourcing extends to creativity. Individuals are increasingly delegating creative tasks to AI, such as generating artwork or written content. This trend risks a future where successive generations lack the fundamental skills to create art, music, or literature without AI, potentially leading to a helpless state of dependency on machines. Such dependency could result in a posthuman future where humans cease to be creators and become mere processors of data or energy, effectively entering a state of digital slavery. The choice to yield to this system is voluntary, yet it involves foregoing the abilities endowed by consciousness and souls, allowing machines to assume the role of creators.

AI, Reality, and the Soul

AI is susceptible to hallucinating: producing confident claims about realities that do not exist. Some proponents argue that because AI is considered far more intelligent than humans, these unobservable realities perceived by AI must be genuine, and that society should be structured according to them. This viewpoint is contested, particularly since such perceptions could simply be the hallucinations of a human-created machine, accepted through an almost religious adherence to AI among powerful figures.

Engaging in creative acts is seen as essential for fulfilling the soul. If AI is allowed to generate creations, the human creator does not experience the same sense of fulfilment. Allowing the soul to atrophy by abandoning creative pursuits, which have sustained human souls for millennia, contributes to the posthuman future.

One predicted outcome of AI's rise is the emergence of a new religion. This is viewed as a desired outcome by certain architects of AI policy, as it aligns with the atrophy of human souls and creativity, fostering reliance on AI as a new spiritual entity. The ultimate objective appears to be a world where humans are easily controlled, harvested for data or energy, and are no longer creators.

Resistance and Decentralisation

Resisting the trajectory towards AI control necessitates embracing personal responsibility and acting at a local level. This involves acquiring essential skills for personal and community survival, rather than relying on national leaders or a singular political saviour to resolve all societal issues. The current system, particularly the financial system, which fundamentally relies on public trust, does not merit this trust. Trust must be withdrawn from those who have historically exploited the public and instead be placed in trustworthy individuals at the local level.

A fundamental shift of power back to the local level, fostering decentralisation, is crucial to diminish the authority of national or state entities. This power shift must be initiated by the public, as historical patterns show the state consistently accumulating more money and power for itself.

The antidote to cognitive diminishment is to avoid outsourcing all decision-making and skills to AI. While occasional AI use for convenience may not lead to long-term harm, pervasive reliance on it for all tasks will profoundly alter human capabilities. The vision of societal-level cognitive diminishment is unequivocally negative.

It is imperative for individuals to choose the path of creation: actively making decisions, exercising preferences, and affirming the very attributes that define humanity, rather than becoming subservient to machines, a path that leads to digital slavery. Individuals must critically consider the long-term implications of AI use, beyond its immediate convenience.