Killer robots: How far is too far with technological advancement and artificial intelligence?
Should the decision over life and death be left to a robot?
People are always intrigued by the possibilities of science and technology, what I would love to term the ‘coolness of science’. The single thought that machines can be programmed to do certain things on their own is simply mind-blowing, for the creators, the spectators, or better still, the consumers. This explains why movies like Iron Man, Terminator, X-Men, and Transcendence are huge box office hits. Our love for such movies goes beyond the impressive stunts and 3D effects; there’s a deeply rooted attraction in every man to the idea of limitless possibilities. This is why tech companies are always upgrading and launching new devices, and the same reason we look forward to having them.
We live in an age where we have become very dependent on the auto-correct tool on our smartphones and computers. I remember being upset that my computer could only auto-correct simple words like ‘the’ or ‘that’ when typed wrongly. What about other, more complex words? Why do I have to make those corrections myself? Computers should be programmed to fix these things. Today, most electronic devices come with voice, fingerprint, and optical recognition. And there’s more to come, thanks to technological advancement. But these are just electronic gadgets, and technology has taken far greater strides, like the creation and advancement of autonomous military robots.
Last month, at the opening of the International Joint Conference on Artificial Intelligence (IJCAI) 2015 on July 28 in Buenos Aires, the likes of Stephen Hawking, Steve Wozniak, and Elon Musk called for a worldwide ban on the development and launch of autonomous weapons. In an open letter signed by over 1,000 artificial intelligence researchers and experts, it is argued that while robots can help reduce soldier casualties during a battle, they could wreak havoc and cause greater loss of life if they are left to operate on their own.
The debate on whether or not Lethal Autonomous Weapons Systems (LAWS), aka ‘killer robots’, should be permitted in some capacity or banned completely has been ongoing. In April, a second meeting on LAWS was held in Geneva, Switzerland, to debate the way forward; the first was held in May 2014. There were generally two broad views on the matter:
- Lethal Autonomous Weapons should be placed in the same category as biological and chemical weapons, and should be pre-emptively banned.
- Lethal Autonomous Weapons should be placed in the same category as precision-guided weapons and should be regulated.
Experts against LAWS argue that giving robots the ability to make life and death decisions is fundamentally wrong, and that the weapons must be stopped before they proliferate. In his report on the Protection of Civilians in Armed Conflict issued in November 2013, UN Secretary-General Ban Ki-moon took note of LAWS, saying that the ability of such systems to operate in accordance with international human rights law is being questioned.
“Is it morally acceptable to delegate decisions about the use of lethal force to such systems? If their use results in a war crime or serious human rights violation, who would be legally responsible? If responsibility cannot be determined as required by international law, is it legal or ethical to deploy such systems?” he asked.
Experts in support of LAWS have argued that by taking soldiers out of the line of fire, autonomous weapons have the potential to reduce the number of casualties in wars, and that these robots would make better soldiers given that they are faster, more precise, and can withstand more physical damage than humans. “They are potentially more accurate, more precise, completely focused on the structures of International Humanitarian Law (IHL), thus, in theory, are preferable even to human war fighters, who may panic, seek revenge or just plain stuff up,” wrote Sean Welsh on The Conversation.
According to Professor Ron Arkin, lethal autonomous robots should be seen more as the next generation of smart bombs. Jai Galliot, who once campaigned against LAWS, now speaks in support; in his article ‘Why we should welcome killer robots, not ban them’, he described the open letter calling for a ban on artificially intelligent killer robots as “misguided and perhaps even reckless.”
Amid these ongoing debates, the open letter asks a key question: should a global Artificial Intelligence (AI) arms race be started or prevented? In 2013, the New York-based Human Rights Watch warned that with the advancement in science and tech, countries like the United States, Britain, China, Israel, and Russia will soon move towards the use of military robots. “If one or more country chooses to deploy fully autonomous weapons, others may feel compelled to abandon policies of restraint, leading to a robotic arms race.” If this happens, the existence of the human race could be threatened.
Another reason that fuels the call for a ban on killer robots, as stated in the letter, is that the raw materials used to produce them are cheap and easy to obtain. They can easily be mass-produced, sold on the black market, and exploited by terrorists and dictators. “Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.” Imagine autonomous military robots in the hands of ISIS or Boko Haram. Now, that is something to think about.
The daily evolution of science and technology is one of mankind’s greatest blessings. Things never thought possible are made possible. But this same technology could be humanity’s greatest curse if it gets out of control, and that is highly probable with the advancement and promotion of LAWS. As the open letter states, AI and technology can make battlefields safer for humans in many other ways, without creating new tools for killing people.
The post Killer robots: How far is too far with technological advancement and artificial intelligence? appeared first on Ventures Africa.