Defence is among the first frontiers where AI is being deployed at a rapid pace, and it has already triggered a race among nations to claim superiority in applying AI to warfare.
As part of the push for ethical and just AI, scientists have already recommended that all critical systems involving human health, safety and life keep a human in the loop (in the driver's seat): the AI system should not be able to trigger the action on its own; the final decision should rest with a human who holds overriding control over that system.
Recently, commercial off-the-shelf drones and robots with fire-and-forget capability have become available. They require no link between the operator and the drone's control system, and they combine intelligence, surveillance, reconnaissance and strike capabilities. According to a UNSC report, Libyan forces used drone weapon systems that were free to choose their own targets based on computer-vision algorithms; these unmanned drones were used against breakaway rebel factions. In such systems, targets may be identified by their dress, the kind of explosives they carry, or the specific formations of their groups. AI as a technology is not foolproof: it can err because of flawed training data, lack of clarity in objectives, or changes in the ground situation. Using such systems without human intervention is very risky and can lead to undesirable outcomes for all parties involved. Any such mistake can prove costly, as it has the potential to trigger a chain of events that escalates into a larger conflict between stakeholders.
Israeli media has touted the recent conflict between Israel and Palestine as an AI-dominated operation. The technology's major interventions were real-time feedback from the ground via satellite imagery; applying data science to the torrents of data arriving from different streams and sources; choosing targets based on impact assessment; pinpointing target locations while minimizing civilian casualties; and a faster response rate for countermeasures and counter-operations. An integrated data-processing centre coupled with the decision command becomes key to reaping the full benefits of the technology: intelligence from signals units, visual-intelligence systems, human intelligence and geographical intelligence must be fused to enable better decisions at the right time. This gave the Israeli military an edge, leaving the other parties vulnerable; its countermeasures were swift and carried an element of surprise. The US, China and France are already perfecting their own integrated systems for gathering intelligence from different stakeholders. As more and more countries jump on the bandwagon of AI-dominated warfare (they have little choice), the world will become more challenging, and the chances of a world war starting from 'AI errors' or 'AI acting independently' become real.
There are specific challenges to the Indian security scenario. China is believed to have developed and tested miniature autonomous submarines; together with supporting autonomous drone swarms, these can mount launch-and-forget suicide missions. Such aggressive tactics completely change the balance of sophisticated options available to an adversary. India has only recently woken up to these challenges and is now aggressively trying to catch up; the fast-evolving technology and security scenario will demand consistent capability enhancement from the Indian defence forces. The recent attack on the Jammu airbase is just a pointer in that direction.
Views expressed above are the author’s own.
END OF ARTICLE