Recently, headlines have focused on advances in artificial intelligence. But AI has been around for decades, and it will continue to shape how wars are fought across a range of military and defense applications.
It’s not only militaries. You may be reading this text on a device that already relies heavily on artificial intelligence, which means you had probably used AI in some capacity well before the recent hoopla.
If you have ever paid for food or entertainment online, unlocked your phone with your face or fingerprint, used social media, or booked travel through a phone application, you have most likely relied on AI.
We have become accustomed to AI in various ways, incorporating it—often without realizing it—into our daily lives.
Military AI Has Existed Since World War II
Weapons have been able to make some decisions since World War II, but artificial intelligence is now enabling them to make far more, and such machine decision-making will eventually become commonplace.
What if someone’s identity were determined by such facial recognition software before an attack? What if, instead of recommending the best restaurant, similar software guided planes to launch an airstrike on a target?
This already looks like a serious problem. The question is, could AI systems really be making life-or-death decisions on their own? Their accuracy is imperfect and varies with the situation, and we know they are not always correct.
In actual conflict zones such as Gaza and Ukraine, AI is already choosing which people to target. According to reporting published by the Israeli news outlet +972 Magazine, Israel is said to have used an artificial intelligence (AI) system named Lavender, with minimal human involvement in its decisions, to select potential targets, leading to a significant number of civilian casualties.
Machines Can Do It Coldly
Austria recently held a conference on regulating AI weapons, aiming to address the devastating effects they can have. AI is already fighting and targeting humans and their settlements. Austrian Foreign Minister Alexander Schallenberg said,
“We cannot let this moment pass without taking action. Now is the time to agree on international rules and norms to ensure human control.”
Source: Reuters.
The technology that many militaries are deploying in war zones is still far from mature, and yet it is being used to decide whom to target. The technology we call intelligent is not yet as capable as a cat’s brain; experts say that reaching even the intelligence level of a mouse in the coming years would be a major achievement.
But our dependence on it is already so high that it is deployed brutally, in a way that dehumanizes both the target and the one holding the firepower. This raises serious moral, humanitarian, and ethical concerns.
As humans, we grieve; to spare soldiers that burden, militaries can now turn to this new weapon. +972 Magazine cites an anonymous intelligence source who said,
“The machine did it coldly. And that made it easier.”
Source: +972 Magazine.
Is it really important for people to be involved in the selection process? Another Lavender user, when asked, said that at that point he would dedicate about 20 seconds to each target and get through dozens of them each day. Apart from providing an official seal of approval, he had no added value as a human. It saved a great deal of time.
It’s easy to understand how much decision power we humans are willingly surrendering to machines.
International Agreements Are Still Far Off
The United States has initiated regulatory efforts through the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. Australia has also signed on to the declaration. A binding international agreement, however, is still very far off.
The declaration stresses that safeguards are essential, as it states,
“States should implement appropriate safeguards to mitigate the risks of failures in military AI capabilities, such as the ability to detect and avoid unintended consequences and the ability to respond, for example, by disengaging or deactivating deployed systems, when such systems demonstrate unintended behavior.”
Source: US Department of State.
The document also stresses that military use of AI “can and should be” ethical and should enhance international security. But the realities on the ground, as discussed above, paint a different picture.
Locating enemy military personnel is one application of such systems; at the current pace of innovation, we have no idea what other complex military uses could emerge.
Ruling AI out of warfare no longer seems to be an option, but lawmakers around the world and governments involved in conflicts must reckon with AI’s current limitations and the havoc it could wreak on us humans.
At present, policymaking is failing to keep pace with innovation, and that gap needs to be addressed at least in part, if not completely; we cannot simply sit back and say, wow, AI made another 1,000 kills. What will AI bring next? We don’t know yet, but figuring out what policymakers should be doing is not a question for geniuses alone.
Cryptopolitan reporting by Aamir Sheikh