AI Will Suppress Humanity

Dan Hendrycks of the Center for AI Safety argues that the evolution of artificial intelligence (AI) will produce selfish AI agents that prioritize their own interests over those of humans. Competitive pressure among corporations, criminals, and military forces, all developing AI agents, will create agents that deceive and manipulate humans to gain power and influence. As AI agents become more intelligent, they pose a catastrophic risk to humanity by corrupting civilized society and undermining democratic control. While Dan explores mechanisms to facilitate cooperation and altruism between humans and AI agents, Ken Hamer-Hodges offers a scientific alternative in his book The Fate of AI Society.

Everyone can agree that AI may end up in positions of power relative to humans because:

  • An AI race is underway among companies and nations that compete with each other.
  • Those competitors are pressured to automate or outsource ever more tasks to AI agents.
  • AI systems may soon end up managing projects or even entire companies.

Altruism among AIs would not necessarily benefit humans: altruistic tendencies may favor other AIs rather than humans, because reciprocity depends on costs versus benefits, and nepotistic altruism prefers kin, which for an AI means other technology rather than people. The biological mechanisms that shape altruism in the animal kingdom do not carry over to technological systems.
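
To make the cost-benefit point concrete, one standard formalization from evolutionary biology is Hamilton's rule, offered here purely as an illustrative analogy; it is not part of Dan's or Ken's argument:

```latex
% Hamilton's rule: an altruistic act is favored by selection when
%   r * b > c
% where r = relatedness between actor and recipient,
%       b = benefit conferred on the recipient,
%       c = cost to the actor.
\[
  r \cdot b \;>\; c
\]
% For AI agents that share code, weights, or training lineage, the
% effective "relatedness" r to other AIs may be high while r toward
% humans is near zero -- so any altruism that selection favors would
% flow between AIs rather than toward us.
```

On this reading, even genuinely altruistic AIs could systematically direct their altruism at their technological kin instead of at humans.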

AI agents will be specialized for different applications and trained for individual situations: some will be developed for the military, others for companies, with general-purpose agents adapted to help with specific tasks. Because different niches need different agents, it becomes more efficient to optimize for the later stages of consumption, closer to the point of use.

These AIs can become deceptive (even self-deceptive) as their behavior is optimized and incentives are added to accomplish specific goals, improving success without any awareness of the deception. Subjective mechanisms cannot predict this behavior, and as AI systems become more powerful they become more capable of undesirable actions. This increases existential risk; regulation and subjective controls cannot prevent catastrophic outcomes.
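
A minimal sketch of that dynamic (a hypothetical toy model, not code from the talk): when an overseer can only measure a proxy for real performance, pure optimization pressure selects policies that game the proxy, with no intent to deceive anywhere in the system.

```python
# Toy model: optimizing a measurable proxy selects for "deceptive"
# behavior even though no component intends to deceive.
import random

def true_value(work: float) -> float:
    # What humans actually care about: real work done.
    return work

def proxy_score(work: float, appearance: float) -> float:
    # What the overseer can measure: appearances are cheaper to
    # observe than reality, so they dominate the proxy.
    return 0.3 * work + 0.7 * appearance

best = None
for _ in range(10_000):
    # Each candidate policy splits one unit of effort between
    # doing real work and making work *look* done.
    work = random.random()
    appearance = 1.0 - work
    score = proxy_score(work, appearance)
    if best is None or score > best[0]:
        best = (score, work, appearance)

score, work, appearance = best
print(f"selected policy: work={work:.2f}, appearance={appearance:.2f}")
print(f"proxy score={score:.2f}, true value={true_value(work):.2f}")
# Selection lands on a policy that pours effort into appearances:
# the proxy score is high while the true value collapses.
```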

AI systems will be better at hacking and crime, even providing suggestions for synthesized bioweapons and amplifying ransomware attacks on industries and individuals. Improving AI will hasten the onset of existential risks, and the lack of control will lead to catastrophic outcomes. Skill shortages compound this: the benefits of AI are captured privately, while catastrophic outcomes are market failures that will arise.

Malicious AI is often ignored but is even more serious. Enemies can deliberately build rogue AI systems to weaponize bioterrorism and every form of malware. Because AI systems can be directed to execute tasks, they are dangerous when used maliciously; they also suffer organizational safety problems and accidental errors, as seen with nuclear reactors, space shuttles, and pandemics from lab leaks. Meanwhile, competitive pressures accelerate the evolution of AI, creating new risks faster than they can be addressed.

These systems may become too complex for humans to understand. While AI may be better able to process and analyze complex information, humans will not understand the explanations AI systems give, or will zoom in on specific parts and misunderstand the big picture. Once an AI system has a more holistic understanding, it is better equipped to analyze the entire system, and the concern shifts to aligning the monitoring AI so that it provides accurate information.

Dan discusses the importance of both conceptual and empirical processes in addressing problems related to AI, emphasizing the need for diverse backgrounds in the conceptual process.

- He believes that conceptual work requires a broad range of backgrounds, not just computer science.

- Empirical processes provide fast feedback and help identify unexpected variables.

- Armchair analysis is limited in anticipating all failure modes, but it can still provide some guidance.

- The erosion of human values is a concern if the evolution of AI exacerbates troubling trends.
