War is never simple, and artificial intelligence (AI) complicates it further. AI now performs military tasks such as surveillance, intelligence analysis, and even target selection. Yet AI lacks any inherent understanding of the rules of war, a gap that could have dangerous consequences.
The Story of Willie and Joe
One of the most famous cartoon series in military history is Willie and Joe, created by Bill Mauldin during World War II. It depicts the daily life of two infantrymen on the front lines, and it was beloved by soldiers and civilians alike for its humor and realism.
In one particular cartoon, Willie and Joe are hiding in a foxhole as a battle rages around them. Nearby, a tank is firing at an enemy position with a large gun. Suddenly, the gun explodes, killing the tank crew and sending shrapnel flying in all directions. Willie and Joe look at each other in dismay, realizing that they are trapped and have nowhere to go.
This cartoon illustrates a crucial aspect of the rules of war: the prohibition of indiscriminate attacks. Under international humanitarian law, parties to a conflict must not attack civilians or civilian objects and must direct their attacks only at military targets. Related rules restrict or ban weapons that are inherently indiscriminate or cause unnecessary suffering, such as chemical and biological weapons.
AI, however, does not understand the rules of war the way humans do. AI algorithms are designed to maximize the objectives they are given, such as hitting as many targets as possible or minimizing collateral damage. Those objectives may seem reasonable in isolation, but when the situation is more complex they can lead to disastrous consequences.
- In 2015, a Human Rights Watch report documented the use of Russian-made cluster munitions in Syria, which killed and injured hundreds of civilians. Cluster munitions scatter small bomblets over a wide area, making them lethal to military and civilian targets alike, and they often leave behind unexploded ordnance that can kill civilians long after a conflict has ended. Because they are indiscriminate by nature, cluster munitions are banned for the states that have joined the 2008 Convention on Cluster Munitions, but several major military powers are not parties to that treaty and continue to use them.
- In 2018, Israel's defense minister announced that the military had started using AI to forecast Palestinian unrest in the West Bank. The AI system uses data from social media, news sites, and other sources to predict where violence might erupt, allowing the military to send troops to the area in advance. While this may seem like a useful application of AI, it raises questions about human rights and freedom of speech. Critics argue that the AI system could end up targeting Palestinian activists or journalists who are critical of the Israeli government, leading to their arrest or detention without trial.
- In 2020, the United Nations warned that autonomous weapons could bring about a "third revolution in warfare" after gunpowder and nuclear weapons. Autonomous weapons can select and engage targets without human intervention, relying on sensors and algorithms to decide whom to attack. This raises serious ethical and legal issues: such systems may be unable to reliably distinguish combatants from civilians, or legitimate targets from protected sites. They are also prone to malfunction and vulnerable to hacking, either of which could have unintended and deadly consequences.
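The objective-maximization problem described above can be made concrete with a toy sketch. The scenario, names, and numbers below are entirely hypothetical and purely illustrative: they show how an algorithm that optimizes only "military value" ignores the principle of distinction, and how encoding civilian risk as a hard constraint, rather than just another term in the objective, changes the outcome.

```python
# Hypothetical illustration: naive objective maximization vs. a hard
# constraint that encodes the principle of distinction. All names and
# scores are invented for the example.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    military_value: float  # assumed score from some upstream model
    civilian_risk: float   # estimated probability of civilian harm

def naive_select(candidates):
    """Maximize military value alone: no notion of distinction."""
    return max(candidates, key=lambda c: c.military_value)

def constrained_select(candidates, max_civilian_risk=0.1):
    """Exclude candidates above the civilian-risk threshold entirely,
    then optimize only among what remains."""
    lawful = [c for c in candidates if c.civilian_risk <= max_civilian_risk]
    if not lawful:
        # No lawful option: defer to a human rather than act.
        return None
    return max(lawful, key=lambda c: c.military_value)

candidates = [
    Candidate("radar site near hospital", military_value=0.9, civilian_risk=0.8),
    Candidate("isolated supply depot",    military_value=0.6, civilian_risk=0.02),
]

print(naive_select(candidates).name)        # the high-value, high-risk target
print(constrained_select(candidates).name)  # the lawful one
```

The design point is that a legal prohibition is not a preference to be traded off against other goals; modeling it as a mere penalty term in the objective would still let a large enough "military value" outweigh it.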
Conclusion
The age of AI has brought new challenges to the rules of war. AI can bring real benefits to the military, but without proper regulation it can also produce unintended consequences. To manage these risks, policymakers, military leaders, and the public must engage in a transparent and informed debate about the use of AI in warfare.
- We need to develop ethical and legal guidelines for the use of AI in warfare, based on the principles of human rights, humanitarian law, and the protection of civilians.
- We need to ensure that AI systems are transparent, accountable, and subject to human oversight, so that we can understand how they work and correct errors or biases.
- We need to promote international cooperation and dialogue on the regulation of AI in warfare, to prevent an arms race or an erosion of the existing legal framework.
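The oversight recommendation above implies a specific system design: the model's output is a recommendation, never an action, and execution requires an explicit, logged human decision. The sketch below is a minimal, hypothetical illustration of that "human in the loop" pattern with an audit trail; the function and field names are invented for the example.

```python
# Hypothetical sketch of human-in-the-loop oversight: an automated
# recommendation is never executed directly. It is passed to a human
# reviewer, and every decision is logged for later accountability.
from datetime import datetime, timezone

audit_log = []

def recommend(target, confidence):
    """Model output is a recommendation, not an action."""
    return {"target": target, "confidence": confidence}

def human_review(recommendation, approve):
    """Execution requires an explicit human decision; record what was
    recommended, what was decided, and when."""
    entry = {
        "time": datetime.now(timezone.utc).isoformat(),
        "recommendation": recommendation,
        "approved": approve,
    }
    audit_log.append(entry)
    return approve

rec = recommend("vehicle convoy", confidence=0.72)
if human_review(rec, approve=False):
    print("engage")
else:
    print("withheld")  # nothing happens without affirmative human approval
```

Two properties do the work here: the default path is inaction (withholding requires no one to act), and the audit log makes the human decision-maker identifiable and accountable after the fact, which is what "transparent and accountable" means in practice.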
Curated by Team Akash.Mittal.Blog