There is growing concern that terror organizations and rogue nations will get their hands on killer robots in the near future, according to a stern warning from a senior defense research chief.
Alvin Wilby, vice-president of research at French defense giant Thales, told a House of Lords committee that it will not be long before hostile groups get their hands on lethal artificial intelligence (AI).
Autonomous weapons, which can control themselves and attack without human intervention, are already being developed. According to researchers, these weapons could fall into the wrong hands and cause extensive damage.
Speaking at the House of Lords inquiry this week, Mr. Wilby said the "genie is out of the bottle" with this sort of potentially deadly technology. He warned that attacks could be carried out by “swarms” of small drones requiring little human intervention.
He also told the Lords Artificial Intelligence committee: "The technological challenge of scaling it up to swarms and things like that doesn't need any inventive step.
"It's just a question of time and scale and I think that's an absolute certainty that we should worry about.”
The major threat posed by this evolving technology comes not only from these new-age weapons but also from other technology, such as smart cars, which could be hacked and used to target pedestrians.
Mr. Wilby continued: “If someone's car is reprogrammed to kill pedestrians, it's become an autonomous weapons system. That's a credible terrorist threat.”
Ministry of Defence official Mike Stone added: “I think it's absolutely inevitable that this is going to get into the hands of non-state actors and certainly rogue states; North Korea and Iran top the list in most people's minds.”
A few days ago, Elon Musk, who has previously warned of the dangers of AI even as he develops driverless cars, said that AI poses a greater threat to world safety than North Korea. Musk tweeted on Friday: “If you're not concerned about AI safety, you should be. Vastly more risk than North Korea.”