AI in War: Pulling the Trigger Without Calling the Shots


It was a cold winter day in Syria when the enemy came out of nowhere. Sgt. Williams and his team were caught off guard and outnumbered. Amidst the chaos and gunfire, Williams noticed a strange object buzzing in the sky. It was a drone, controlled by AI. Although hesitant at first, Williams knew it was his only hope. He gave the drone coordinates and targets, and seconds later, it fired at the enemy and took them down.

Scenes like this are no longer confined to movie scripts. They illustrate the growing role of artificial intelligence in war: AI-powered drones, robots, and algorithms are now part of the modern battlefield, performing tasks that once required human hands, from surveillance and reconnaissance to combat operations.

The Ethics of AI in War

While AI has proven its effectiveness in war, it has also raised ethical concerns. As AI evolves and becomes more autonomous, the question arises: should it play a bigger role in decision-making? Should AI be the one calling the shots in war?

Proponents argue that AI can make better decisions than humans in certain situations. It is not swayed by fear, fatigue, or stress, which can cloud human judgment under fire. It can weigh vast amounts of data and variables and make predictions based on probabilities, and in some cases it can act faster and more accurately than humans, potentially saving lives.
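
To make "predictions based on probabilities" concrete, here is a minimal Python sketch that combines several noisy evidence scores into a single threat probability with a logistic function. Every feature name, weight, and bias here is invented for illustration and is not drawn from any real targeting system.

```python
import math

# Toy illustration of probability-based assessment: combine several
# noisy evidence scores into one threat probability via a logistic
# function. All feature names and weights below are hypothetical.

def threat_probability(features: dict[str, float],
                       weights: dict[str, float],
                       bias: float) -> float:
    """Return a probability in (0, 1) from weighted evidence scores."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic squashing into (0, 1)

if __name__ == "__main__":
    weights = {"radar_signature": 2.0, "speed_anomaly": 1.5, "iff_mismatch": 3.0}
    observation = {"radar_signature": 0.8, "speed_anomaly": 0.4, "iff_mismatch": 0.1}
    p = threat_probability(observation, weights, bias=-2.5)
    print(f"Estimated threat probability: {p:.2f}")
```

The point is not the arithmetic but the shape of the decision: the model outputs a graded probability, not a verdict, and what is done with that probability is a policy choice.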

Opponents counter that delegating life-and-death decisions to machines is dangerous and unethical. AI lacks moral judgment, empathy, and accountability. If an AI system causes collateral damage or violates human rights, who is responsible? Can we hold machines accountable for their actions?

For example, in 2019 a US drone strike in Afghanistan killed some 30 pine nut farm workers who were mistaken for militants. The drone operators were following orders from superiors who relied on faulty intelligence. The operators pulled the trigger, but the decision-making chain that put the farmers in their sights was flawed, and accountability was diffuse.

Several concrete examples illustrate the potential benefits and risks of AI in war:

  1. In 2018, the Israeli Defense Forces reportedly used an AI system to predict which Palestinian protesters were likely to become violent. The system analyzed social media posts and other data to identify patterns and behaviors, and was accurate 86% of the time, according to an internal IDF report (a toy sketch of this kind of classification, and of how such an accuracy figure is computed, follows this list).
  2. In 2020, the US Department of Defense adopted new ethical guidelines for the use of AI in war. The guidelines state that AI should be used for "specific and well-defined" tasks, and that human oversight is required at all times.
  3. In 2016, a Tesla operating on its Autopilot driver-assistance system crashed into a truck, killing the driver. Though not a military incident, it raised lasting questions about the safety and reliability of autonomous systems.
  4. In 2021, the United Nations released a report calling for a global ban on lethal autonomous weapons systems, also known as killer robots. The report warns that such systems could lead to the dehumanization of war and pose a threat to human dignity and security.
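
As promised in item 1, here is a toy Python sketch of pattern-based classification and of how an accuracy figure such as 86% is computed: predictions are compared against labeled examples, and accuracy is simply the share of correct calls. The keyword list, posts, and labels are all hypothetical, and a real system would use a trained statistical model rather than a hand-written rule.

```python
# Toy sketch of pattern-based classification and accuracy measurement.
# The keywords, example posts, and labels are invented for illustration.

RISK_KEYWORDS = {"attack", "burn", "weapon"}  # hypothetical signal words

def predict_violent(post: str) -> bool:
    """Flag a post if it contains any hypothetical risk keyword."""
    words = set(post.lower().split())
    return bool(words & RISK_KEYWORDS)

# Hypothetical labeled evaluation set: (post, actually_violent)
labeled_posts = [
    ("we will attack the checkpoint", True),
    ("join the peaceful march tomorrow", False),
    ("bring a weapon to the rally", True),
    ("bring water and signs", False),
    ("burn off some energy at the gym", False),  # false positive: contains "burn"
]

correct = sum(predict_violent(post) == label for post, label in labeled_posts)
accuracy = correct / len(labeled_posts)
print(f"Accuracy on toy set: {accuracy:.0%}")  # a figure like "86%" is this ratio
```

Note that even a high accuracy leaves a real error rate: at 86%, roughly one assessment in seven is wrong, which matters enormously when the output influences the use of force.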

Conclusion

AI has become a game-changer in war, bringing advantages and risks that demand our attention. AI can effectively pull the trigger, but it should not call the shots: human oversight and accountability remain essential to ethical decision-making in war. We must ensure that AI is used for specific and well-defined tasks, and that it operates within an ethical framework that respects human dignity and security.

Here are three practical tips for using AI in war, drawn from the points above:

  1. Restrict AI to specific and well-defined tasks, as the 2020 US Department of Defense guidelines require.
  2. Keep a human in the loop: let AI recommend, but require a human to authorize any use of force.
  3. Establish clear lines of accountability before deployment, so responsibility for errors is never ambiguous.
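
As a minimal sketch of tip 2, the following Python snippet shows a human-in-the-loop gate in which the model may only recommend and a human must explicitly authorize. The class fields, threshold, and prompt are all hypothetical; the point is the structure, not the specifics.

```python
# Minimal sketch of a human-in-the-loop engagement gate: the AI may
# recommend, but only a human may authorize. All fields, the threshold,
# and the prompt are hypothetical.

from dataclasses import dataclass

@dataclass
class Recommendation:
    target_id: str
    threat_probability: float  # model's estimate, in [0, 1]
    rationale: str             # human-readable summary of the evidence

def human_authorizes(rec: Recommendation) -> bool:
    """Block until a human operator explicitly approves or rejects."""
    print(f"Target {rec.target_id}: p={rec.threat_probability:.2f} ({rec.rationale})")
    return input("Authorize engagement? [y/N] ").strip().lower() == "y"

def engage_if_authorized(rec: Recommendation) -> None:
    # The AI never acts on its own output; "no" is the default when
    # the operator is unsure, declines, or gives no clear answer.
    if rec.threat_probability >= 0.9 and human_authorizes(rec):
        print("Engagement authorized by human operator.")
    else:
        print("Engagement withheld.")
```

The design choice worth noticing is the default: when the operator declines, hesitates, or is unavailable, the system does nothing. The fail-safe state is holding fire, not firing.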
