Autonomous weapons at war

Chris Reed:
The dilemma posed by artificial intelligence-driven autonomous weapons — which some scientists liken to the “third revolution in warfare, after gunpowder and nuclear arms” — is that to take fullest advantage of such weapons, the logical move would be to leave humans entirely out of lethal decision-making, allowing for quicker responses to threats and in theory making us safer. But if future presidents and Pentagons trusted algorithms to make such decisions, conflicts between two nations relying on such technology could rapidly escalate — to possibly apocalyptic levels — without human involvement.

The U.S. military officially scoffs at this idea. In making life-and-death decisions, “there will always be a man in the loop,” Deputy Defense Secretary Work told the Times in October. But Work and others who try to offer a reassuring vision of a future in which radical new military technologies are deployed with care and caution may be swimming against the tide.
In a 2013 Wall Street Journal op-ed, authors and teachers Robert H. Latiff and Patrick J. McCloskey warned of the dangers of this kind of thinking: “Full lethal autonomy is no mere next step in military strategy: It will be the crossing of a moral Rubicon. Ceding godlike powers to robots reduces human beings to things with no more intrinsic value than any object.”

But there is a pecuniary twist to this debate. In coming years, America’s military and political leaders won’t be considering whether to embrace autonomous defense and combat in a vacuum in which moral, ethical and philosophical concerns are carefully weighed. In the post-sequester era, military budgets have been cramped for years. Whatever President Trump’s short-term plans, this budget pressure is unlikely to recede in the medium- and long-term as the national debt grows and an aging population sends Social Security and Medicare costs soaring. Autonomous weapons are so relatively inexpensive that qualms about their riskiness could be swept aside — not just in Washington but in Beijing and Moscow as well.

This fear of a future in which such weapons are “cheap and ubiquitous” led more than 20,000 AI researchers, scientists and interested individuals — including Elon Musk, Stephen Hawking and Steve Wozniak — to sign a Future of Life Institute petition endorsing a ban on offensive autonomous weapons.

Will this have any effect on the ultimate decision-makers? That doesn’t appear to be the case so far.
There is much more.

Banning weapons has never been particularly effective. Many of the weapons you see in martial arts movies were developed because Japan banned anyone other than the samurai from owning swords. The restrictions on battleships between WWI and WWII spurred the development of the aircraft carrier. Could you ever trust adversaries like Russia or North Korea to abide by such a ban?
