
Re: chloebware post# 77101

Friday, 03/15/2013 3:26:53 PM

There are very deep questions about drone warfare.
Here's an opinion from Major General (Ret.) Latiff:
Updated March 14, 2013, 7:15 p.m. ET

With Drone Warfare, America Approaches the Robo-Rubicon
By ROBERT H. LATIFF and PATRICK J. MCCLOSKEY

Recent reports on the Obama administration's use of military drones to fight terrorism sparked controversy about foreign policy and about international and constitutional law. Yet the development of drones is just one part of a revolution in war-fighting that deserves closer examination—and considerable soul-searching—about what it will mean for the moral and democratic foundations of Western nations.

Drones are unmanned aerial vehicles that, together with unmanned ground and underwater vehicles, constitute primitive precursors to emerging robotic armies. What now seems like the stuff of Hollywood fantasy is moving toward realization.

Over the next two to three decades, far more technologically sophisticated robots will be integrated into U.S. and European fighting forces. Given budget cuts, high-tech advances, and competition for air and technological superiority, the military will be pushed toward deploying large numbers of advanced weapons systems—as already outlined in the U.S. military's planning road map through 2036.

These machines will bring many benefits, greatly increasing battle reach and efficiency while eliminating the risk to human soldiers. If a drone gets shot down, there's no grieving family to console back home. Politicians will appreciate the waning of antiwar protests, too.

The problem is that robotic weapons eventually will make kill decisions on the battlefield with no more than a veneer of human control. Full lethal autonomy is no mere next step in military strategy: It will be the crossing of a moral Rubicon. Ceding godlike powers to robots reduces human beings to things with no more intrinsic value than any object.

When robots rule warfare, utterly without empathy or compassion, humans retain less intrinsic worth than a toaster—which at least can be used for spare parts. In civilized societies, even our enemies possess inherent worth and are considered persons, a recognition that forms the basis of the Geneva Conventions and rules of military engagement.

Lethal autonomy also has grave implications for democratic society. The rule of law and human rights depend on an institutional and cultural cherishing of every individual regardless of utilitarian benefit. The 20th century became a graveyard for nihilistic ideologies that treated citizens as human fuel and fodder.

The question now is whether the West risks, however inadvertently, going down the same path.

Unmanned weapons systems already enjoy some autonomy. Drones will soon navigate highly difficult aircraft-carrier takeoffs and landings. Meanwhile, technology is pushing the kill decision further away from human agency. Robotic systems can deliver death blows while operated by soldiers thousands of miles away. Such a system can also easily be programmed to fire "based solely on its own sensors," as stated in a 2011 U.K. defense report.

The kill decision is still subject to many layers of human command, and the U.S. Defense Department recently issued a directive stating that emerging autonomous weapons "shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force."

Yet this seems more like wishful thinking than realistic doctrine. Military budget cuts are making robotic autonomy almost fiscally inevitable. A recent study by the Reserve Forces Policy Board concluded that current military-personnel levels are unsustainable, consuming half the Defense Department budget. The Center for Strategic and Budgetary Assessments, in a study published in July, found that military-personnel costs will account for the entire defense budget by 2039, if costs continue growing at the current rate and defense spending increases only by inflation. Many robotic units cost one-tenth of what it takes to put a human soldier in the field.
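
A quick back-of-the-envelope check of that 2039 projection (my own sketch, in Python; the article gives the endpoints but not the underlying growth rates, so the rate computed below is inferred from its figures, not taken from the CSBA study):

# Premises from the article: personnel costs consume half the defense
# budget today (2013) and would consume all of it by 2039 if they keep
# growing at their current rate while the overall budget rises only with
# inflation. The personnel share must therefore double over 26 years,
# which implies an excess annual growth rate g satisfying (1 + g)^26 = 2.

years = 2039 - 2013                # horizon implied by the article
g = 2 ** (1 / years) - 1           # annual growth above inflation

print(f"Implied excess growth of personnel costs: {g:.2%} per year")
# prints roughly 2.70% per year above inflation

In other words, by the article's own premises the projection only requires personnel costs to outpace inflation by a little under 3% a year, which is what gives the "fiscally inevitable" argument its force.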

In possible future military engagements with antagonists such as Iran, North Korea or China, the unfettered air superiority that the U.S. and its allies enjoyed in Iraq and Afghanistan will be challenged. It will be far more difficult for human operators to communicate reliably with remote unmanned weapons in war's chaos. The unmanned weapons will be impossible to protect unless they are made autonomous.

Recently, military terminology has shifted from keeping humans "in the loop" on kill decisions to putting them merely "on the loop." The next technological steps will put soldiers "out of the loop," since the human mind cannot function rapidly enough to process the data streams that computers digest instantaneously to provide tactical recommendations and coordinate with related systems.

Fully autonomous weapons systems have already been deployed. Israel's Iron Dome antimissile system automatically shot down dozens of Hamas rockets in November. Iron Dome (and similar systems protecting U.S. Navy ships) would respond autonomously to inbound manned fighter jets and make the kill decision without human intervention.

Since these systems are defensive and must be autonomous to protect the innocent effectively, do they pose the same moral dilemma as offensive weapons? Should lethal autonomy be restricted to defensive weapons? At what point do defensive capabilities embolden offensive operations?

So far, debate about robotic autonomy has focused solely on compliance with international humanitarian law. In December, Human Rights Watch released a report calling for a pre-emptive ban on autonomous weapons, noting that "such revolutionary weapons" would "increase the risk of death or injury to civilians during armed conflict."

Michael N. Schmitt, chairman of the U.S. Naval War College's International Law Department, responded that war machines can protect civilians and property as well as humans. This assurance aside, it is far from clear whether robots can be programmed to distinguish between large children and small adults, and in general between combatants and civilians, especially in urban conflicts. Surely death by algorithm is the ultimate indignity.

Time is running out for military decision makers, politicians and the public to set parameters for research and deployment that could form the basis for national policy and international treaties. The alternative is to blindly accept as inevitable whatever technology offers. Let's not be robotic in our acquiescence.
...................................................
Maj. Gen. (Ret) Latiff, a consultant on national defense and intelligence technology, is an adjunct professor at the University of Notre Dame. Mr. McCloskey, the author of "The Street Stops Here: A Year at a Catholic High School in Harlem" (University of California, 2010), serves on the faculty at the School of Education at Loyola University Chicago.
http://online.wsj.com/article/SB20001424127887324128504578346333246145590.html
A version of this article appeared March 15, 2013, on page A13 in the U.S. edition of The Wall Street Journal, with the headline: With Drone Warfare, America Approaches the Robo-Rubicon.

“To be yourself in a world that is constantly trying to make you something else is the greatest accomplishment.” ---Ralph Waldo Emerson
