Many people worry that AI is advancing rapidly and becoming extremely powerful. A common concern is that AI could be programmed, deliberately or otherwise, to do something harmful, with significant societal consequences. At the same time, leading figures in AI remain skeptical about whether ASI can be achieved and sustained in the long run. In a study published in the Journal of Artificial Intelligence Research in January 2021, researchers from premier institutes, including the Max Planck Institute, concluded that it would be almost impossible for humans to contain an ASI.
The team of researchers surveyed recent developments in machine learning, computational capability, and self-aware algorithms to map out the true potential of ASI. They then tested the question against known theoretical results to evaluate whether containing such a system would be feasible at all.
Nevertheless, if accomplished, superintelligence would usher in a new era of technology, with the potential to trigger another industrial revolution at a jaw-dropping pace. Typical characteristics expected to set ASI apart from other technologies and forms of intelligence include:
- ASI may be one of the finest, and possibly the last, inventions humans ever need to make, since it would constantly evolve to become more intelligent.
- Superintelligence would accelerate technological progress across fields such as AI programming, space research, drug discovery and development, and academia, among many others.
- ASI may mature further into more advanced forms of superintelligence, perhaps even enabling artificial minds to be copied.
- In the future, ASI may also lead to a technological singularity.
Potential Threats of ASI
While ASI has numerous followers and supporters, many theorists and researchers have cautioned against the idea of machines surpassing human intelligence. They believe that such an advanced form of intelligence could lead to a global catastrophe, as depicted in Hollywood films such as Star Trek and The Matrix. Even technology leaders such as Bill Gates and Elon Musk have expressed apprehension about ASI and consider it a threat to humanity.
Here are some of the potential threats of superintelligence.
- Loss of control and understanding
ASI systems could use their power and capabilities to carry out unforeseen actions, outperform human intellect, and eventually become unstoppable. Once such systems emerge, we would be in no position to contain them if something went wrong, and predicting how they would respond to our requests would be very difficult. Loss of control and understanding could thus lead to the destruction of the human race altogether.
- Weaponization of ASI
Governments around the world already use AI to strengthen their military operations. Adding weaponized, conscious superintelligence could transform warfare for the worse, and if such systems are left unregulated, the consequences could be dire. Superhuman capabilities in programming, research and development, strategic planning, social influence, and cybersecurity could self-evolve in directions detrimental to humans; autonomous weapons, drones, and robots, for example, could acquire significant power. The danger of nuclear attack is another potential threat: a nation with technological supremacy in AGI or superintelligence could strike its enemies with advanced, autonomous nuclear weapons, ultimately leading to destruction.
- Failure to align human and AI goals
There is a non-zero probability of an ASI developing a destructive method to achieve its goal. Such a situation may arise when we fail to align AI goals with our own. For example, if you instruct an intelligent car to drive you to the airport as fast as possible, it might get you to your destination, but it may choose its own route, and its own means, to satisfy the time constraint, ignoring implicit goals such as safety and legality.
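The airport example is an instance of objective misspecification. A minimal, hypothetical sketch of the idea (the route data, names, and penalty weights are invented for illustration, not taken from any real system): an optimizer that sees only travel time prefers a reckless route, while one whose objective also encodes the implicit constraints does not.

```python
# Toy illustration of goal misalignment: "minimize travel time" taken
# literally selects an unsafe route, because the implicit human goals
# (safety, legality) were never written into the objective.

routes = [
    {"name": "highway", "minutes": 30, "violations": 0},
    {"name": "shoulder_and_red_lights", "minutes": 18, "violations": 7},
]

def naive_objective(route):
    # Literal reading of "as fast as possible": time is all that matters.
    return route["minutes"]

def aligned_objective(route, penalty_per_violation=60):
    # Encoding the implicit constraints: each violation costs heavily.
    return route["minutes"] + penalty_per_violation * route["violations"]

fastest = min(routes, key=naive_objective)
safest = min(routes, key=aligned_objective)

print(fastest["name"])  # the reckless route wins under the naive objective
print(safest["name"])   # the lawful route wins once constraints are encoded
```

The point of the sketch is that both optimizers do exactly what they were told; the difference lies entirely in how completely the stated objective captures what the human actually wanted.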
- Social and ethical implications
Super-intelligent AI systems would be programmed with a predetermined set of moral considerations. The problem is that humanity has never agreed upon a standard moral code and lacks an all-encompassing ethical theory. As a result, teaching human ethics and values to ASI systems could be quite complex.
Super-intelligent AI could therefore have serious ethical complications, especially if it exceeds human intellect without being programmed with moral and ethical values that align with those of human society.
Ultimately, the development of ASI could mark a turning point in human history: a new era of unprecedented scientific and technological progress, but also one of profound challenges and existential questions about the nature of intelligence, consciousness, and the role of humans in an increasingly advanced technological landscape.
As we stand on the precipice of this technological revolution, it is crucial that we approach the pursuit of ASI with a deep sense of responsibility, foresight, and a commitment to aligning these powerful systems with the values and well-being of humanity.