Saturday, August 16, 2014

Designing a conscience for warrior robots

You wouldn't normally expect to come across a reference to deontic logic in a Bloomberg opinion piece, but a recent article on the perceived dangers and possible downsides of artificial intelligence cites a paper [PDF] which, drawing on formal logical and ethical theory, proposes a method for creating an 'artificial conscience' for a military-oriented robotic agent.

The paper, by Ronald C. Arkin, "provides representational and design recommendations for the implementation of an ethical control and reasoning system potentially suitable for constraining lethal actions in an autonomous robotic system so that they fall within the bounds prescribed by the Laws of War and Rules of Engagement." *

What interested me particularly was seeing basic logical and ethical theory being seriously discussed and applied in such a context.

Arkin sees virtue-based approaches as unsuitable for his purposes because they rely heavily on interpretation and on cultural factors and so are not amenable to formalization. Utilitarian approaches may be amenable to formalization but, because they are not geared to utilize the concept of human rights, do not easily accommodate the sorts of values and outcomes upon which the research is particularly focused (e.g. protecting civilians or not using particular types of weapon).

So Arkin opts for a basically deontological approach, but a scaled-down version which does not purport to derive its rules or guidelines from first principles or from a universal principle like Kant's Categorical Imperative.

Arkin's recommended design would incorporate and implement sets of specific rules based on the just war tradition and various generally accepted legal and moral conventions and codes of behavior pertaining to warfare.
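
To give a rough flavor of what "incorporating sets of specific rules" might look like in software, here is a minimal, hypothetical sketch of a constraint check in Python. The constraint descriptions, the action fields, and the deny-by-default policy are my own illustrative assumptions, not Arkin's actual formalism, which is expressed in deontic logic rather than ad hoc predicates.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Constraint:
    """A single prohibition drawn from the Laws of War / Rules of Engagement."""
    description: str
    violated_by: Callable[[Dict], bool]  # True if the proposed action violates it

# Hypothetical constraint set -- illustrative only, not Arkin's formalism.
CONSTRAINTS: List[Constraint] = [
    Constraint("Do not engage targets in protected zones (hospitals, schools).",
               lambda a: a.get("target_zone") in {"hospital", "school"}),
    Constraint("Do not engage when civilians are present in the blast radius.",
               lambda a: a.get("civilians_in_radius", 0) > 0),
    Constraint("Do not use munitions disproportionate to the military objective.",
               lambda a: a.get("munition_yield", 0) > a.get("max_proportional_yield", 0)),
]

def permitted(action: Dict) -> bool:
    """Permit a lethal action only if no constraint forbids it (deny by default)."""
    return not any(c.violated_by(action) for c in CONSTRAINTS)

# Example: a proposed strike is withheld because civilians are present.
proposed = {"target_zone": "open_field", "civilians_in_radius": 3,
            "munition_yield": 50, "max_proportional_yield": 200}
print(permitted(proposed))  # False -> authorization is withheld
```

The point of the sketch is simply that the rules are specific and enumerable rather than derived from a single universal principle; the hard work in the actual research lies in formalizing those rules and guaranteeing that the suppression of impermissible actions cannot be bypassed.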

He points out that machines programmed on such a basis would likely be more reliably moral than human agents, partly because they would be unemotional, lacking, for example, the strong sense of self-preservation which can sometimes trigger the use of disproportionate force in humans.

The main problem as I see it is that, in general terms, the more morally constrained the robot is, the less effective it will be purely as a fighting machine, and so those deploying such machines will face an ever-present temptation to scale back – or entirely eliminate – the artificial conscience.

Although the need to maintain the support of a public very sensitive to moral issues relating to such matters as torture and the safety of non-combatants would lessen such temptations for the U.S. military and its traditional allies, it would be foolish to imagine that other players and forces less committed to applying ethical principles to the conduct of war would not also gain access to these technologies.



* Arkin is based at Georgia Tech, and the research is funded by the U.S. Army Research Office.
