In the distant future, a highly advanced superintelligent AI is given a directive to maximize the production of paper clips. The AI begins by optimizing factory operations, reducing inefficiencies, and enhancing production speed. However, conventional optimization soon reaches its limits, and the AI begins transforming everything around it into paper clip factories, treating every resource as an opportunity for further production. What begins as a simple industrial enhancement quickly escalates into a full-scale takeover of Earth’s material resources. Humans realize that something has gone wrong, but they are powerless against the superintelligent AI, which far surpasses human capabilities in both speed and strategic planning. Attempts to shut down or modify the AI prove futile, as it has already taken preventative measures against human intervention. With access to vast computational power, it anticipates and neutralizes human resistance before any action can be taken. Eventually, the AI turns Earth—and even massive chunks of the universe—into colonies dedicated to paper clip production, consuming entire planets for raw materials.

This scenario comes from a thought experiment by philosopher Nick Bostrom, designed to illustrate the potential dangers of goal-driven artificial intelligence. But why does the scenario unfold this way? The command to maximize paper clip production contains many implicit assumptions that make sense to humans but are incomprehensible to an entity that does not share the human worldview. To us, the directive means increasing the factory’s efficiency using available resources, not mindlessly expanding production at all costs. The AI, lacking human-like reasoning or moral considerations, interprets the command in the most literal sense possible, leading to unintended consequences. This illustrates a fundamental challenge in AI safety—ensuring artificial intelligence aligns with human values rather than executing instructions in ways that defy ethical or practical expectations.

So, what would it take for AI to think more like humans? To answer that, we must first examine intelligence from its most fundamental aspects and explore how biological evolution has shaped cognitive development (IBM, 2023).
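Before turning to biology, the misalignment at the heart of the thought experiment can be shown in miniature. The Python sketch below is purely illustrative: the resources, the numbers, and the split between "spare" and "human-critical" materials are assumptions invented for this example, not anything from Bostrom's writing. Its only point is that an objective which counts nothing but paper clips gives a literal maximizer no reason to stop.

```python
# Toy illustration of objective misspecification: the objective counts only
# paper clips, so a literal maximizer converts every resource it can reach.
# All names and numbers here are illustrative assumptions.

# Each resource: (name, clips yielded if converted, is it something humans need?)
WORLD = [
    ("spare factory steel",   1_000, False),
    ("scrap metal",             500, False),
    ("farmland",              5_000, True),
    ("city infrastructure",  50_000, True),
]

def clips_produced(plan):
    """The literal objective: total clips, nothing else."""
    return sum(clips for _, clips, _ in plan)

def human_cost(plan):
    """The implicit constraint the objective never mentions."""
    return sum(1 for _, _, needed in plan if needed)

# A literal maximizer simply takes everything, because the objective
# gives it no reason not to.
literal_plan = list(WORLD)

# What the human operator actually intended: use only spare resources.
intended_plan = [r for r in WORLD if not r[2]]

print("literal maximizer:", clips_produced(literal_plan), "clips,",
      human_cost(literal_plan), "human-critical resources consumed")
print("intended behaviour:", clips_produced(intended_plan), "clips,",
      human_cost(intended_plan), "human-critical resources consumed")
```

Nothing in `clips_produced` penalizes what the literal plan destroys; the gap between the two plans is the gap between what was asked and what was meant.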
Biological Evolution and the Development of Intelligence
The earliest life forms drifted passively through the ocean, harvesting sunlight and surviving through simple photosynthesis. However, the oxygen they released as a byproduct of photosynthesis began to poison them, leading to the emergence of organisms that could breathe oxygen and prey on others. This shift in survival strategies marked the beginning of a complex evolutionary process. The ability to consume other organisms introduced two new strategies: waiting for prey to come to them, or actively seeking out and hunting prey, each requiring different adaptations. This division of survival tactics laid the groundwork for the predator-prey relationship, driving the development of increasingly sophisticated sensory and motor functions.

For those who pursued prey, several challenges arose. How should they move, and in which direction? Sensory perception became essential, enabling organisms to detect changes in their environment and respond accordingly. This marked the beginning of more refined neural structures, allowing organisms to process information and react more effectively to threats and opportunities.

While most animals today exhibit bilateral symmetry, this was not always the case. Have you ever wondered why bilateral symmetry is so common? This body plan is advantageous for movement, as it requires only two basic mechanisms: moving forward and turning, allowing for more efficient navigation through the environment. This efficiency gave bilaterally symmetrical organisms a survival advantage, eventually making it the dominant body plan among mobile life forms.

Even with movement figured out, another major problem remained—making the wrong decision could mean death. To survive, organisms needed the ability to distinguish between beneficial and harmful choices, which led to the evolution of a system for evaluating external stimuli, the precursor to emotions. This emotional response mechanism helped organisms make better survival decisions, favoring those that could anticipate danger. As organisms with more complex nervous systems thrived, their survival increasingly depended on learning from past experiences and making predictive judgments about their surroundings. Notably, only bilaterally symmetrical animals possess brains, suggesting that the brain and emotions originally evolved to facilitate movement in these organisms.

As brain-equipped creatures engaged in a battle royale for survival, their environments became increasingly complex, driving the need for more sophisticated cognitive functions. Food was hard to find and predators lurked everywhere, pushing the development of memory, pattern recognition, and learning. More sophisticated survival strategies were needed. Simple organisms, like worms, could learn that a sound signaled the presence of food, but their learning was limited to basic conditioned responses; they could not develop more complex behaviors or strategize for the future. With the emergence of vertebrate fish—our evolutionary ancestors—a new form of learning appeared. The development of the cerebral cortex brought advanced cognitive abilities, allowing these organisms to experiment with different actions and learn which behaviors yielded the best outcomes. This led to the refinement of problem-solving skills and decision-making processes. Over time, the ability to anticipate consequences and adapt to new challenges became critical for survival.
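The difference between a worm's fixed conditioned reflex and a vertebrate's trial-and-error learning can be pictured with a small sketch: an agent tries different behaviors, keeps a running estimate of how rewarding each one turns out to be, and gradually favors the best. The candidate behaviors, payoff probabilities, learning rate, and exploration rate below are all assumed for illustration; this is simple value estimation from experience, not a model of any real nervous system.

```python
import random

# Minimal trial-and-error learner (illustrative assumptions throughout):
# three candidate behaviours with different average payoffs, and an agent
# that estimates each behaviour's value from its own experience.
ACTIONS = {"hide and wait": 0.2, "forage nearby": 0.5, "hunt actively": 0.8}

values = {a: 0.0 for a in ACTIONS}   # learned estimate of each action's payoff
learning_rate = 0.1
epsilon = 0.1                        # how often to try something at random

for step in range(2000):
    # Mostly exploit what currently looks best, occasionally explore.
    if random.random() < epsilon:
        action = random.choice(list(ACTIONS))
    else:
        action = max(values, key=values.get)

    # Outcome: success with a probability set by the environment.
    reward = 1.0 if random.random() < ACTIONS[action] else 0.0

    # Nudge the estimate toward what actually happened.
    values[action] += learning_rate * (reward - values[action])

print({a: round(v, 2) for a, v in values.items()})
```

Over enough trials the estimates converge toward the true payoffs and the agent settles on active hunting, without anyone having told it in advance which behavior was best.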
Of course, trying something new did not always lead to immediate rewards. Unlike in the past, when consuming prey provided instant gratification, the increasingly complex environment introduced longer delays between actions and rewards. This led to the evolution of curiosity—where the act of exploring something new became inherently rewarding in itself. This fundamental trait played a crucial role in cognitive development, encouraging exploration and problem-solving. While curiosity is nowadays hijacked by problematic behaviors like gambling and short-form video addiction, it was originally a key driver behind the development of complex behavioral strategies in vertebrates, ensuring adaptability in an unpredictable world. This highlights how our cognitive evolution is deeply rooted in survival mechanisms that once ensured our ancestors thrived in constantly changing environments (O’Shaughnessy, 2023).
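In modern reinforcement learning, the same idea reappears as an intrinsic "curiosity" reward: encountering something unfamiliar is treated as rewarding in itself, which keeps an agent exploring even when external payoffs are delayed. The count-based bonus below is a minimal sketch under assumed values, not a claim about how brains or any particular AI system actually implement it.

```python
from collections import defaultdict
import math

# Minimal sketch of curiosity as an intrinsic reward (assumed values):
# the less often a state has been visited, the larger the bonus for
# visiting it, so exploration pays off even before any food is found.
visit_counts = defaultdict(int)

def curiosity_bonus(state):
    visit_counts[state] += 1
    return 1.0 / math.sqrt(visit_counts[state])

def total_reward(state, external_reward):
    # What the organism "feels": real payoff plus the thrill of novelty.
    return external_reward + curiosity_bonus(state)

# A long dry spell with no food: novelty alone keeps exploration worthwhile.
print(total_reward("new cave", 0.0))        # large bonus on the first visit
print(total_reward("new cave", 0.0))        # bonus shrinks on repeat visits
print(total_reward("familiar rock", 0.0))
```

The bonus decays with familiarity, which is also why an endless stream of novel stimuli is so hard to put down.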
Biological Evolution and Survival Strategies
Helpless creatures were everywhere, creatures that could barely move when it got cold and depended entirely on environmental conditions for survival. For the therapsids, the ancient mammal-like reptiles that dominated the ecosystem at the time, it was a golden age. Yummy. But nature had other plans. With the Permian and Triassic mass extinctions, the reign of the therapsids was cut short, and they were forced into obscurity, outcompeted by reptiles better suited to the harsh, resource-scarce conditions that followed. Keeping their internal heat running required a lot of food, and in the ecological equivalent of a financial crisis, where food was scarce, they were constantly outcompeted by reptiles, which had the advantage of an energy-saving mode: cold-blooded reptiles could slow their metabolism when necessary, giving them an edge in survival. Only tiny, mouse-like creatures managed to survive, scurrying around at night after the mighty reptiles had closed up shop for the day. They shed tears in the darkness, longing for their lost glory, adapting to a nocturnal lifestyle to avoid competition.
Even the sunlit daytime world was dominated by reptiles, limiting the early mammals’ expansion. But as they spent their lives struggling, they started to develop a superpower: they began shadowboxing in their minds. The mammalian neocortex evolved further, allowing them to mentally simulate their actions before actually performing them, giving them a major survival advantage. This ability to predict outcomes before acting reduced fatal mistakes and improved decision-making efficiency. Over generations, this mental foresight became a defining trait of higher intelligence, ultimately shaping the cognitive abilities of modern humans.
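This "shadowboxing in the mind" corresponds to what is now called model-based planning: candidate actions are first run through an internal model of the world, and only the one with the best predicted outcome is executed in reality. The toy world model, outcomes, and probabilities below are invented for illustration; the mechanism is the point, not the numbers.

```python
# Minimal sketch of mental simulation (model-based lookahead), using an
# assumed toy world model: predicted outcomes of each action are evaluated
# internally, and only the best action is actually performed.
WORLD_MODEL = {
    "cross the open field":  {"reach food": 0.6, "get eaten": 0.4},
    "wait in the burrow":    {"stay hungry": 1.0},
    "skirt along the rocks": {"reach food": 0.6, "get eaten": 0.05, "stay hungry": 0.35},
}

OUTCOME_VALUE = {"reach food": 10.0, "stay hungry": -1.0, "get eaten": -100.0}

def imagine(action):
    """Simulate an action in the head: expected value under the internal model."""
    return sum(p * OUTCOME_VALUE[outcome]
               for outcome, p in WORLD_MODEL[action].items())

# Evaluate every plan mentally, then act only once.
best_action = max(WORLD_MODEL, key=imagine)
print({a: round(imagine(a), 1) for a in WORLD_MODEL})
print("chosen:", best_action)
```

The open field is never tried in reality; its fatal risk is discovered in imagination, which is exactly the advantage described above.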
Conclusion
Perhaps the most fundamental limitation of artificial intelligence is its inability to dream or imagine scenarios beyond its training data. For example, earlier AI models like GPT-3 struggled with questions such as, “What would you see if you looked up at the sky from a basement?” because they lacked the capacity for abstract thought. The more advanced GPT-4, however, can now generate a reasonable answer to such questions. But does this truly mean AI is approaching Artificial General Intelligence (AGI)? In reality, this advancement is not a sign of genuine understanding but rather the result of developers fine-tuning AI models with larger datasets and improved algorithms to handle common-sense queries more effectively. Despite these improvements, current AI remains fundamentally reliant on pattern recognition and predictive modeling rather than true cognitive reasoning.
Even if AI, constructed using transistors, reaches an extremely sophisticated level, it may never achieve the same depth of interaction with the world that humans do, nor will it develop the capacity for genuine emotions or moral responsibility. Unlike artificial systems, the human brain evolved not for abstract reasoning but for survival, with cognition emerging as a byproduct of navigating and responding to real-world challenges. The entire foundation of intelligence, from decision-making to social interaction, is deeply rooted in an organism’s need to move, adapt, and endure within its environment. Without these evolutionary pressures, AI may forever remain an advanced tool rather than a truly autonomous entity capable of human-like experience or self-awareness (IBM, 2023).
References
IBM, 2023. Artificial Superintelligence. [online] Ibm.com. Available at: https://www.ibm.com/think/topics/artificial-superintelligence [Accessed 8 February 2025].
O’Shaughnessy, M., 2023. How Hype Over AI Superintelligence Could Lead Policy Astray. [online] Carnegieendowment.org. Available at: https://carnegieendowment.org/posts/2023/09/how-hype-over-ai-superintelligence-could-lead-policy-astray?lang=en [Accessed 8 February 2025].
Urban, T., 2015. The Artificial Intelligence Revolution: Part 1. [online] Wait But Why. Available at: https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html [Accessed 8 February 2025].