In a previous chapter there was mention of the moral dilemmas that certain technologies bring in their wake. For example, knowing that the machine is the best means available for the early diagnosis of breast cancer, it would be irresponsible, indeed arguably immoral, not to deploy these means where they can be afforded. This side of the moral equation belongs to the human, and every major technology has brought with it comparable conundrums. Take these token technologies as examples of the human side of the moral equation: the nuclear bomb, genetically modified organisms, nuclear waste disposal. In these cases the objects themselves have no moral dimension.

Digital computers are a different kind of technology, however, because they introduce a very primitive kind of moral agency. By being exceptionally good at executing statistical models, computers become agents that make decisions in the face of uncertainty. Machine morality is not determined by moral values but by numeric values. It is bounded by the laws of statistical modelling, and the possible coordinates of a decision fall within the following matrix, commonly known as a confusion matrix.

                          Actual positive          Actual negative
    Predicted positive    True Positive (TP)       False Positive (FP)
    Predicted negative    False Negative (FN)      True Negative (TN)
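To make the matrix concrete, here is a minimal sketch, in Python, of how a thresholded statistical model populates the four cells; the scores, labels, and the 0.5 threshold are all hypothetical.

    # A minimal sketch: counting the four cells of the confusion matrix.
    # The data and the 0.5 threshold are hypothetical.
    def confusion_matrix(actuals, predictions):
        tp = fp = fn = tn = 0
        for actual, predicted in zip(actuals, predictions):
            if predicted and actual:
                tp += 1      # true positive: correct alarm
            elif predicted and not actual:
                fp += 1      # false positive: wrong alarm
            elif not predicted and actual:
                fn += 1      # false negative: missed case
            else:
                tn += 1      # true negative: correct silence
        return tp, fp, fn, tn

    scores  = [0.91, 0.12, 0.67, 0.45, 0.88, 0.30]    # hypothetical model outputs
    actuals = [True, False, True, True, False, False]
    predictions = [score >= 0.5 for score in scores]  # decision under uncertainty

    tp, fp, fn, tn = confusion_matrix(actuals, predictions)
    print(f"TP={tp} FP={fp} FN={fn} TN={tn}")         # TP=2 FP=1 FN=1 TN=2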
Numerous cities across Europe deploy some kind of computer-based statistical model designed to identify number plates on cars. At the time of this writing the False Positive (FP) rate of these systems ranges from 8% to 15%, which means that roughly that proportion of people receive fines they should not be getting. One could argue that the decision to deploy a system with this failure rate is ultimately human. At some point it was decided that the cost of deploying such a system would be less than the benefits of deploying it, and so the failure rate is accepted as a mere statistical fluke, to be compensated for by human labour in filing and processing claims that dispute machine-made decisions.
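A back-of-the-envelope sketch of what this failure rate means in practice; the daily volume of fines below is hypothetical, and only the 8% to 15% range comes from the figures above.

    # Hypothetical volume; only the 8%-15% FP range is taken from the text.
    fines_issued_per_day = 1_000
    for fp_rate in (0.08, 0.15):
        wrongly_fined = fines_issued_per_day * fp_rate
        print(f"At a {fp_rate:.0%} FP rate: ~{wrongly_fined:.0f} wrongful fines per day,"
              f" each one a claim for a human to file and process")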

One could argue that the moral decision belongs to the system designer or the programmer, because in the face of uncertainty a default decision is given as output. Occam's Razor states that among competing hypotheses, the one that makes the fewest assumptions should be selected. A default decision is a fabricated assumption, and so a maker who outputs one is in direct violation of Occam's Razor. A lax moral standard in the maker gives rise to an incipient morality of the machine.

Machine morality can then be defined as the capacity of the machine to make the wrong decision when it could instead make no decision at all.
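Read in code, this definition points to an alternative the designer could have taken: a decision rule that abstains when the model is not confident enough, rather than emitting a fabricated default. A minimal sketch, with a hypothetical confidence threshold of 0.9:

    # A decision rule that abstains instead of defaulting.
    # The 0.9 threshold is a hypothetical design choice.
    def decide(score, threshold=0.9):
        if score >= threshold:
            return True          # confident positive
        if score <= 1 - threshold:
            return False         # confident negative
        return None              # no decision at all: defer to a human

    for score in (0.97, 0.55, 0.04):
        decision = decide(score)
        print(score, "->", "defer to human" if decision is None else decision)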

For the first kind of moral dilemma, the kind in which a technology has no moral dimension in itself, Bruno Latour suggests that "to become moral and human once again, it seems we must always tear ourselves away from instrumentality, reaffirm the sovereignty of ends, rediscover Being; in short, we must bind back the hound of technology to its cage."

Such a rosy escape route is not afforded when moral decisions themselves are encoded and executed by digital computers, when the machine is both the means and, by its very existence, an end in itself. The hound that bites the human is the human itself.