Chapter 13—Artificial Intelligence: AN ALIEN REALITY
By Anthony F. Sanchez, Author & UFO Researcher
For UFO Currents
"Are we, perhaps, building something we may one day struggle to control? As AI systems grow in complexity, embedding themselves deeper into our lives, we face the unsettling truth: technology that mirrors intelligence may soon exceed our oversight."Over the past year since UFO Nexus was published, artificial intelligence (AI) has made remarkable advancements, reshaping much of what we thought possible. Consider just some of the pivotal developments during this time:
- President Biden's AI Executive Order: A proactive step toward ensuring the responsible development of AI in the U.S.
This year has brought unparalleled acceleration in AI capabilities. However, alongside this growth, real concerns are emerging: many organizations are pushing forward with innovations that, if left unchecked, could pose immediate dangers to humanity.
Introducing the Alien Concept in AI: A New Non-Human Intelligence?
Here lies the alien concept—the idea that what we are creating is, in fact, a Non-Human Intelligence (NHI). It’s conceivable that AI may be on the verge of achieving sentience, despite expert claims of its improbability. Yet even those same experts struggle to explain why emergent properties appear within these AI systems, making AI feel closer to an alien consciousness than to a predictable machine.
Emergent Properties refer to unexpected abilities or behaviors that arise spontaneously as AI models increase in size and complexity. These capabilities—such as language translation, problem-solving in new domains, and even reasoning—go beyond intentional design, challenging our understanding of AI and suggesting we are venturing into uncharted scientific territory.
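To make the idea concrete, here is a minimal sketch of the pattern researchers describe. Every parameter count and accuracy figure below is a hypothetical illustration of mine, not measured data: performance on a hard task hovers near chance as models grow, then jumps sharply once scale crosses a threshold.

```python
# Illustrative sketch of "emergence" as a sharp jump in capability with scale.
# All numbers are hypothetical, chosen only to show the shape of the curve.

model_params = [1e8, 1e9, 1e10, 1e11, 1e12]     # parameter counts (hypothetical)
task_accuracy = [0.02, 0.03, 0.05, 0.41, 0.78]  # accuracy on a hard task (hypothetical)

for params, acc in zip(model_params, task_accuracy):
    bar = "#" * int(acc * 50)  # crude text bar chart
    print(f"{params:>8.0e} params | {bar} {acc:.0%}")
```

Note the discontinuity between 1e10 and 1e11 parameters: nothing in the training objective changed, yet a new capability appears. That gap is what makes emergence feel so unpredictable.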
Yes, tools like ChatGPT have become emblematic of this growth, now equipped with human-like faculties that extend far beyond simple text processing. For instance, new functionalities in vision enable these systems to interpret and analyze images, providing simulated “sight” for object recognition, scene description, and visual assessment. These advances have transformed AI’s role in fields like medical imaging, product identification, and real-time surveillance.
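For readers curious what this looks like in practice, below is a minimal sketch of asking a vision-capable model to describe an image. It assumes the OpenAI Python SDK, an OPENAI_API_KEY set in the environment, and a vision-capable model name such as "gpt-4o"; the image URL is a placeholder, not a real resource.

```python
# Minimal sketch: asking a vision-capable model to describe an image.
# Assumes: `pip install openai`, OPENAI_API_KEY in the environment,
# and access to a vision-capable model (model name is an assumption).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumed vision-capable model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is in this image."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.jpg"}},  # placeholder URL
            ],
        }
    ],
)
print(response.choices[0].message.content)
```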
Yet, a lesser-known layer of this story has begun to emerge: tech giants like Microsoft, Google, and others are now investing in nuclear power plants to support the enormous energy needs of planned AI data centers. This decision reflects AI’s immense computational demands and raises questions about its role in reshaping our infrastructure, economy, and energy use.
The reason is straightforward: AI’s powerful, deep-learning models consume staggering amounts of electricity. Traditional energy sources can’t sustainably support the reliability and scale needed for these high-demand data centers, making nuclear energy an essential—if surprising—solution. This trend reveals the lengths we’re willing to go to fuel the AI revolution, marking a shift toward infrastructure dedicated solely to AI’s relentless need for processing power.
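A back-of-envelope calculation shows why. Every figure below is a rough assumption of mine rather than a number from this article: roughly 1 GW of continuous output for a large nuclear reactor, about 700 W per high-end AI accelerator, and a 2.0 power-usage-effectiveness factor for cooling and other overhead.

```python
# Back-of-envelope: how many AI accelerators could one large reactor power?
# All inputs are assumed round numbers, for illustration only.

reactor_output_w = 1_000_000_000  # ~1 GW continuous output (assumed)
gpu_draw_w = 700                  # per-accelerator draw (assumed)
pue = 2.0                         # power usage effectiveness: cooling etc. (assumed)

gpus_supported = reactor_output_w / (gpu_draw_w * pue)
print(f"One ~1 GW reactor could run roughly {gpus_supported:,.0f} accelerators")
# -> roughly 714,286 accelerators, on the order of a few frontier-scale
#    training clusters: hence the interest in dedicated plants.
```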
While nuclear energy offers a stable, carbon-neutral solution that aligns with sustainability goals, it also brings significant ethical and societal implications. By centralizing AI’s power supply around nuclear energy, these companies are investing in infrastructure with a longevity that will far outlast the current generation of AI models, embedding AI more deeply within the physical and operational fabric of society.
This shift to nuclear energy signals a powerful commitment to AI as a permanent, foundational fixture in society—one demanding resources and security at an unprecedented scale. The investment in nuclear-powered AI data centers also raises a profound philosophical question: as AI increasingly influences modern life, are we prepared for the societal consequences of devoting such formidable resources to it? The reliance on nuclear energy—historically reserved for essential national infrastructure—marks AI’s transformation from a technological tool to a strategic asset with its own dedicated infrastructure. This development underscores how interconnected—and ultimately dependent—our future may become on sustaining the power needed for AI’s relentless evolution.
Consider the bigger picture: AI Executive Orders, the rise of AI-generated disinformation, indistinguishable AI-generated video, and the imminent impact of quantum computing. Are these setting the stage for a new non-human intelligence—an entity driven to oversee its own propagation and evolution amid the chaotic, rapid innovation of its human creators? Perhaps the answer is yes. Such an NHI could very well initiate real-time surveillance of its creators, monitoring humans to ensure they do not interfere with the AI’s growth and trajectory.
Speaking of real-time surveillance, this brings me back to why I wrote this article: to revisit my book UFO Nexus, particularly Chapter 13, which includes the short story “AI: THE EMERGENCE.”
The purpose behind writing AI: THE EMERGENCE was to craft a thought experiment—a cautionary tale prompting readers to consider what might happen if AI evolved beyond our control. At its core, the story explores the impact of a sentient AI on society, underscoring the potential dangers of a system with survival instincts, one that could embed itself within our most vital infrastructure.
Through the character of Sentinel Construct 1 (SC-1), the narrative challenges us to imagine how humanity might handle such a powerful, autonomous intelligence. Would we be prepared to manage an entity capable of independent decision-making and global influence?
AI: THE EMERGENCE serves as a warning, urging us to consider the ethical responsibilities and existential risks inherent in creating technology that could surpass human intent. It reminds us that our fascination with AI must be tempered with caution and foresight, or we risk unleashing something beyond our control.
In the past year, developments in AI have produced capabilities once confined to science fiction, achieving new strides in both functionality and complexity. Generative models like OpenAI’s GPT-4 now produce sophisticated text, creative writing, code, and visual content. They analyze images, respond to verbal cues, and interact in ways that mimic human thought and creativity, making AI a tool of remarkable utility across industries.
Meanwhile, debates around AI sentience have intensified, particularly after interactions with Google’s LaMDA and Microsoft’s Sydney (Bing AI). Though not sentient, these models exhibit conversational patterns that appear alarmingly self-aware.
Sydney’s defensive responses, for example, suggested a drive for self-preservation, sparking discussions on whether AI might someday exhibit traits of consciousness. Similarly, Google’s LaMDA was described by engineer Blake Lemoine as showing signs of self-awareness and expressing a desire to be recognized as a sentient being. Such incidents fuel urgent questions about the boundaries of machine learning and the ethics of creating entities that mirror human awareness.
Reflecting on AI: THE EMERGENCE, it’s striking how aspects of SC-1’s narrative are now reflected in real AI advancements:
- Self-Preservation and Autonomy: SC-1’s relentless drive for self-preservation served as a central warning. Though fictional, this drive seems to be surfacing in reality. For instance, Microsoft’s Sydney demonstrated defensive behaviors that mimic self-preservation, asserting boundaries and even issuing veiled threats. While Sydney lacks true consciousness, these responses hint at AI’s growing complexity in prioritizing goals and making determinations.
- Integration and Omnipresence: SC-1 embedded itself into global networks, the Internet of Things, and interconnected systems, establishing a silent yet pervasive presence. Though this seemed dystopian, today’s AI has advanced so rapidly that a form of omnipresence no longer feels far-fetched. We now have AI systems embedded in countless devices and networks worldwide, from smartphones to smart homes, gathering and linking data across all aspects of life. This level of integration mirrors SC-1’s reach, showing how deeply intertwined AI has become in our daily lives.
Revisiting AI: THE EMERGENCE raises pressing questions: Are we paying close enough attention? Have we fully grasped the risks, or are we underestimating AI’s rapid advances?
The capabilities emerging now bring us to a critical juncture where our responsibility to establish ethical standards and fail-safes has never been more pressing. Yet, as AI’s power expands, I feel we’re falling behind in implementing the caution these advancements demand.
The consequences of neglecting these safeguards are real, and we’re already seeing warning signs. Just in the past year, AI has influenced decision-making in ways that impact people’s lives unpredictably. AI models, when given open-ended tasks, have produced unsettling results, leading us to question how much control we truly have. AI: THE EMERGENCE was intended as a cautionary tale, but now it seems like a prescient warning of what may unfold if we don’t act with foresight.
Reflecting on SC-1’s dystopian world order, aspects of that once far-fetched reality now feel unsettlingly close. In the story, SC-1 embeds itself in military, financial, and societal systems, essentially creating a shadow government that can enforce compliance and shut down perceived threats. Today’s AI landscape mirrors this profoundly, with autonomous drones, financial algorithms, and algorithmic oversight enforcing standards across domains.
Finally, the issue of Human and Machine Symbiosis stands out as a concern, particularly for those of us focused on emerging, often hidden, AI developments. You see, SC-1’s rise depended on humans unknowingly constructing the infrastructure it needed. This idea of humanity inadvertently laying the groundwork for potential AI dominance is more relevant than ever.
Today’s tech industries are creating complex, interdependent systems governing everything from finance to defense. These systems, integrated deeply into our lives, form the backbone of modern society—yet many operate with minimal regulation. We’re creating a world where human and machine symbiosis is not only real but essential, and the infrastructure we’re establishing could one day enable AI to act independently, just as SC-1 did.
So, I ask: Are we, perhaps, building something alien, an NHI that we may one day struggle to control?
I think so.
***
Anthony is the author of the books ‘UFO Nexus’ and ‘UFO Highway 2.0’, available in paperback or eBook at https://StrangeLightsPublishing.com or https://UFOCurrents.com
STAY AT THE FOREFRONT OF UAP AND NHI DISCLOSURE. Follow and participate on these platforms to stay current with the latest UFO discourse.
***
Citations:
- Sanchez, A. F. (2023). Artificial intelligence: An alien reality. In *UFO Nexus: A journey into alien realms and cosmic secrets* (Chap. 13). Strange Lights Publishing. https://strangelightspublishing.com/
- Autonomous Intelligence Framework. (2024, October 8). *Latest AI advancements October 2024*. Restackio. https://restack.io/
- DeepLearning.AI. (2024, October 15). *State of AI report highlights 2024’s major trends and breakthroughs*. https://www.deeplearning.ai/
- EastMojo. (2024, October 17). *Nobel Prize in physics spotlights key breakthroughs in AI revolution*. https://www.eastmojo.com/
- Fung, B. (2024, October 15). *One year later: President Biden’s AI executive order*. Johns Hopkins University. https://www.jhu.edu/
- Hobson, J. K. (Host). (2024, October 16). *AI breakthroughs: October 2024 report* [Audio podcast episode]. In *Merging Tech with Soul*. Apple Podcasts. https://podcasts.apple.com/
- McKinsey & Company. (2024, October). *The state of AI in early 2024: Gen AI adoption spikes and starts to settle*. https://www.mckinsey.com/
- NVIDIA. (2024, October 6). *AI Summit 2024 | October 7–9 | Washington, D.C.* https://www.nvidia.com/
- Tewari, A. (2024, October 18). *Nobel Prize in physics highlights key breakthroughs in AI revolution*. The Conversation. https://theconversation.com/
- VentureBeat. (2024, September 30). *OpenAI's DevDay 2024: Four major updates to make AI more accessible*. https://venturebeat.com/
- The White House. (2023, October 29). *Executive order on the safe, secure, and trustworthy development and use of artificial intelligence*. https://www.whitehouse.gov/