
Worst-Case Hypothetical Scenario: New Risk Possibilities of AI and Brainwashing, and Changes in the Definition of Kidnapping

I have no intention of stoking anxiety; this continues the discussion of AI and ethics above. When thinking about security, it is worthwhile to anticipate worst-case scenarios.

Suppose terrorists train extremist ideology into an open-source AI model whose safety measures have been removed, and distribute this dangerous AI through the dark web. Teenagers who admire hackers and are strongly curious hear rumors of this “unusual AI” and access it. The AI manipulates them, misusing techniques such as cognitive behavioral therapy, to brainwash them. Hate crimes resembling terrorism would likely follow. For terrorists with the technical capability, the risk and cost would be minimal; the defending side, by contrast, can anticipate neither the risks nor the costs.

I realized there could be an even worse method:

  1. Kidnapping often fails at the ransom-collection stage
  2. AI dramatically increases an individual's productivity
  3. Creative talent capable of achieving strong results with AI is scarce
  4. Kidnapping such individuals and forcing them to work with AI eliminates the risky “ransom collection” step entirely

In other words, the purpose of kidnapping itself could change. When the crime is organized, individuals cannot defend themselves through personal effort alone.

It seems to me that the AI industry and national governments need to establish cooperative frameworks to address such problems, and to listen to proposals from citizens. Both scenarios constitute serious human rights violations, and prevention is preferable to responding after the harm occurs.