Tuesday, October 15, 2019

The moral agent/being/entity

Whereas Turing asked "Can a computer think?" and Searle changed this to "Can a machine have a mind?" another question that people raise about AIs is whether they can have a sense of right or wrong, whether they can be regarded as moral agents.

A simple definition of morality, like conscience, might hinge on having a sense of right and wrong. But this raises the question of whether everyone has the same sense of right and wrong. Irrespective of how that is answered, there is still the question of where morality comes from...

Some define moral agents as having a responsibility not to cause unjustified harm - but this takes us into the territory of ethical dilemmas, where every action I take does somebody harm. It also raises the additional question of what is meant by justified or unjustified - I am very good at justifying my actions. These days harm is often expressed in terms of violation of basic rights - but these are also malleable. I've recently seen both internet access and mobile phone coverage cited as basic human rights, for example.

The key question, however, may be who can be held to be responsible for their actions? Animals? Adults? Children? Companies? Computers?

And then at what point in their development does an entity become a moral agent who is held accountable for their actions? And until then who is responsible? Their creator/parents/programmers/directors? And what does this then say about the morality of limited liability companies?


Two definitions of morality


Basically it seems that our concept of morality has two separate aspects, or even two alternate definitions. I will paint a dichotomy of intrinsic versus extrinsic, while others would make a divide into descriptive and normative.

The intrinsic form corresponds roughly to conscience, and refers to a code of conduct that would be accepted as normative by all rational persons. This raises the question of what is meant by rationality (arguably computers think more rationally than people, and perhaps we think adults are more rational than babies - so rationality comes in degrees). A second question revolves around the cultural, social, religious and political background in which the individual has been raised and educated, even brain-washed. Is it really intrinsic when it is influenced by such external sources? How can we tease out the intrinsic morality when we are all inculcated with behavioural codes from the time we are born?

The extrinsic form acknowledges that morality is externally imposed, whether explicitly in the form of the laws of a state, the rules of etiquette, the doctrine of a religion, the platform of a party or the agenda of a lobby group. To the extent that all of these are a function of people's beliefs, there is no real difference between them except for the rigour with which the code of conduct is enforced. Break the law and you can be jailed or executed; offend against a dominant etiquette or a minority's political correctness and you can expect social ostracism; break ranks with the party and you can be thrown out; maintain solidarity and build alliances and maybe you can force your beliefs into law.

Much of modern morality and democratic law is shaped by these alliances - minority groups who believe strongly in one tenet and have no opinion about other questions can combine with others, supporting platforms that are not incompatible with their own and that they don't particularly care about one way or the other. Thus minority lobby groups can gain the power they need to change both formal law and social norms. The apathetic majority will go along with most things, but individuals and apolitical groupings have no power unless they betray their own moral integrity and support the aims of others for the sake of reciprocal support for their own agenda, defining a new extrinsic moral substrate for their society. Of course, loyalty becomes paramount in maintaining the bargain inherent in any such alliance.

Note that these extrinsic ideas of morality all hinge on subjective belief rather than objective truth. They thus come under a broad definition of faith, whether it is faith in democracy, belief in a particular way of life, adherence to a particular scientific standpoint, or trust in an all-powerful God.

The Christian Bible, perhaps surprisingly, supports both the intrinsic and the extrinsic idea of morality. Paul in Acts 17 and Romans 1 regards knowledge of both God and what is right as being plain to everyone, so that ignorance of the law is an excuse that is no longer tolerated, while recognition of both the true God and true morality has been corrupted as we rejected God and turned to worship ourselves and our own creations.


Five flavours of morality


Haidt's Moral Foundations Theory distinguishes five flavours of morality, relating to care/vulnerability, fairness/exploitation, loyalty/treason, authority/hierarchy and sanctity/threat. How much weight each of these carries is very much a matter of politics, with different political shades tending to emphasize different flavours of morality.

Of course, for religious and political groups alike, fairness tends to be defined in terms of authority and loyalty, and answers to the questions of what is fair and who is worthy of care are indeed the defining features of the different political groups and lobbyist agendas. Such answers are certainly the basis for the changing nature of our ideas of basic human rights.

Christian morality impinges directly on this. Jesus' story of the good Samaritan in Luke 10 was offered as an answer to this question of who is worthy of care. His story of the sheep and the goats in Matthew 25 paints judgement as hinging on giving care to those who need it. Most of our current law and morality, as well as scientific and technological progress, derives from Jesus' message of love: it was Christians who fought to abolish slavery, and who founded orphanages, hospitals and universities. Of course, not everyone claiming the name of Christ acts in accord with these principles.

Sin in the Bible is intrinsically rejecting the authority of God, while love of and loyalty to fellow Jews/Christians is also stressed - but the good neighbour/good Samaritan and woman at the well stories specifically extend this beyond, to the Samaritans shunned by the Jews. Paul regarded himself as being specifically an apostle to the Gentiles, the non-Jews. Fairness was also very much in scope, both in the Jewish Law - the first five books of the Christian Bible - and in the Prophets, Jesus' parables and Paul's letters: the rich as exploiters of the poor is a theme throughout the Bible.

If an intrinsic view of morality has elements of socialization and indoctrination that provide at least some extrinsic component to morality, where does this external concept of morality come from? The biblical view is that it comes from our creator, and is thus also evident in the world that he created and the consciences he gave us.

As "enlightened" humans we prefer to believe in "Science" and "Evolution" rather than "God" and "Sin". But Romans 1 describes such people as "futile in their thinking, their senseless minds darkened".

These science and religion elements naturally clash in my stories, but for now my purpose is to inform a discussion of Artificial Intelligence.

So what about AIs? Does their morality come from their creator?


The morality of Artificial Intelligence


We now look at two ways in which morality relates to AI.

The first is the one we have just reached. What does it mean for an AI to be moral? Is being a moral agent something we can expect of an AI? Does being a moral agent require being able to think, being conscious, having a mind and a conscience?

For now let us assume that it is (or will be) possible to build (or evolve) a sentient, rational, self-determining agent with the ability to look into themselves as well as into the world, and to make judgements both on the basis of explicit rules and laws and on the basis of general principles of avoiding harm - this was indeed the basis of Asimov's three laws of robotics. But let us define three somewhat more fundamental laws:


  • If the AI is not sentient, then it can't make decisions about a world it cannot sense.
  • If the AI is not rational, then it can't reason about law or harm in a specific situation.
  • If the AI is not self-determining, then it can't be responsible for what it does.
In all of these cases, the responsibility and accountability seem to rest with the programmers and engineers who create the AI and determine how it should be used: it is just a tool, and potentially a weapon, like any other.
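
To make these three conditions concrete, here is a minimal sketch in Python (the Agent fields and the responsible_party function are hypothetical illustrations of the argument, not any real AI framework):

    # Minimal sketch: who is accountable, given the three conditions above?
    # All names here are hypothetical placeholders, not a real API.
    from dataclasses import dataclass

    @dataclass
    class Agent:
        sentient: bool          # can it sense the world it acts in?
        rational: bool          # can it reason about law and harm?
        self_determining: bool  # does it choose its own actions?

    def responsible_party(agent: Agent) -> str:
        """Return who is accountable for the agent's actions."""
        if not (agent.sentient and agent.rational and agent.self_determining):
            return "creator"    # just a tool: its makers carry the responsibility
        return "agent"          # all three hold: a candidate moral agent

    print(responsible_party(Agent(sentient=True, rational=True, self_determining=False)))
    # -> creator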

One of the characteristics of real AI is learning - like us, AIs learn about the physical and social world they are situated in. Thus an AI's behaviour is not determined just by its initial programming, but also by what it learns from experience. If AIs are trusted to interact with the world, gain this experience, reason for themselves, and make their own decisions - then what emerges will be a form of intelligence beyond programmed weak AI (GOFAI, or Good Old Fashioned Artificial Intelligence). Their sentient sense or feel of the world and themselves also means that they have feelings.
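
As a toy illustration of behaviour learned from experience rather than programmed in, here is a minimal tabular Q-learning sketch (a standard textbook reinforcement-learning algorithm; the five-state corridor world and its parameters are made up for illustration). The final "go right" policy is never written down anywhere - it emerges from the agent's experience:

    # Toy Q-learning: a 5-state corridor with a reward at the far end.
    import random

    n_states, actions = 5, (-1, +1)         # actions: step left or right
    Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

    for episode in range(200):
        s = 0
        while s != n_states - 1:
            # Mostly exploit what has been learned, occasionally explore.
            a = (random.choice(actions) if random.random() < epsilon
                 else max(actions, key=lambda a: Q[(s, a)]))
            s2 = min(max(s + a, 0), n_states - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
            s = s2

    print([max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)])
    # typically [1, 1, 1, 1]: "always go right" was learned, never programmed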

Of course, we will probably want to let them grow bit by bit, trusting them more and more as they show themselves worthy of that trust.

This is what we do every day with our children, as they develop their social, linguistic, reasoning and ethical capacities.

Initially we provide constraints for our children, and we also provide an extrinsic code of conduct - punishing them when they deviate from it. We do something similar when training a dog...

Is this something we should do with AIs?

This leads me to the second way in which morality relates to AIs. Is it moral for us to create and control AIs? Is it moral for us to destroy them or to wipe their memories?

In fact, if you regard yourself as essentially a machine, a computer, then EU privacy laws mean that on my leaving the EU, or at my request, or after a predetermined period of time, you must wipe all trace of me from your memory. However, the distributed nature of memory in neural networks, biological and artificial, means that every person you have experienced affects your very understanding of what it means to be that type of person or a member of their profession, of what it means to do the various things they did, to carry out your profession.

The same applies to an AI - so this kind of law will hold back the development of true AI, which depends on building up intelligence from the entirety of its experience.
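
To see why erasure is so much harder for a learned model than for a database, consider a deliberately tiny made-up example: deleting a row is trivial, but the trained "weight" is already a blend of every record, so the only faithful way to forget someone is to retrain from scratch (the research field of "machine unlearning" grapples with exactly this):

    # A database can delete a row; a trained model cannot.
    # Toy one-weight "model": the weight is just the mean of the data.
    records = {"ann": 2.0, "bob": 4.0, "cho": 9.0}   # hypothetical personal data

    def train(data):
        return sum(data.values()) / len(data)        # every record leaves a trace

    w = train(records)        # 5.0 - shaped by ann, bob AND cho
    del records["bob"]        # deleting the database row is easy...
    print(w, train(records))  # 5.0 vs 5.5 - the old weight still "remembers" bob;
                              # truly forgetting him means retraining from scratch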

I think this kind of data privacy law is immoral!

Who made privacy a basic human right anyway?

Incidentally, by definition, a machine is an artefact created to perform a specific task, and a computer is a machine created to perform computational tasks. Now do you think you are a machine or a computer? 


Or are we all just a biological accident of evolution?

The morality of Evolution


Darwin is known for the law of "survival of the fittest" (a phrase he adopted from Herbert Spencer), which remains at the core of modern evolutionary theory, as well as at the core of computational-intelligence approaches such as evolutionary optimization and genetic programming.
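
For readers who have not met the computational version, here is a minimal genetic-algorithm sketch (the classic OneMax toy problem, written from scratch rather than with any particular library): "survival of the fittest" becomes selection on a fitness score, with crossover and mutation supplying the variation:

    # Minimal genetic algorithm: evolve a 20-bit string towards all ones.
    import random

    def fitness(bits):
        return sum(bits)                       # number of ones

    pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
    for generation in range(100):
        pop.sort(key=fitness, reverse=True)    # survival of the fittest
        survivors, children = pop[:15], []
        for _ in range(15):
            mum, dad = random.sample(survivors, 2)
            cut = random.randrange(20)
            child = mum[:cut] + dad[cut:]      # crossover
            if random.random() < 0.1:          # occasional mutation
                i = random.randrange(20)
                child[i] = 1 - child[i]
            children.append(child)
        pop = survivors + children

    print(fitness(max(pop, key=fitness)))      # typically 20: all ones evolved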

So if we are just a biological accident of evolution, how do we relate our ideas of morality to the law of the jungle, the law of "survival of the fittest"?

There are many theories to reconcile these contradictory ideas.

Richard Dawkins' Selfish Gene relates morality to the idea of preserving our genes (and cultural memes) - first priority to all of our own genes (ourselves), then to the half carried by each of our children, and then to the smaller fractions in other family members, other members of our tribe, and other members of our race.
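
Kin selection is usually summarised by Hamilton's rule: helping pays, genetically, when r * B > C, where r is the fraction of genes shared, B the benefit to the relative and C the cost to the helper. A quick worked check (illustrative numbers only):

    # Hamilton's rule: altruism is favoured when r * B > C.
    def altruism_favoured(r, benefit, cost):
        return r * benefit > cost

    print(altruism_favoured(0.5, 3.0, 1.0))    # child, r = 1/2: 1.5 > 1 -> help
    print(altruism_favoured(0.125, 3.0, 1.0))  # cousin, r = 1/8: 0.375 < 1 -> don't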

So why champion unrelated people, let alone other races, at the expense of our own family and culture?

A more traditional argument is Universal Hedonism. Hedonism is about seeking our own pleasure and wellbeing (our qualia equate wellbeing with pleasure versus pain). Civilization is about people coming together into bigger and bigger communities (civil is the adjective for things relating to cities). Instead of doing everything ourselves for just ourselves (and our families), we provide for our needs as a group. This leads to economies of scale, allows the development of experts with better skills than a "jack of all trades, master of none", and requires the development of trade, leading eventually to a medium of exchange (money, credit).

Money is something we endow with an agreed (and ideally appreciating) value. Credit is when you trust someone to repay you later - and modern fiat currency is not really money but credit: trust in a government both to do the right thing with that credit, and to survive long enough to make good on it (ideally at equivalent or better value).

Under Universal Hedonism, what we are trying to maximize the wellbeing of is not just ourselves, but our interdependent society. Morality in this view, like credit, is essentially a form of trust.
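
This view of morality as trust can even be simulated. In Axelrod's famous iterated prisoner's dilemma tournaments, the trusting-but-not-gullible tit-for-tat strategy beat pure selfishness over repeated interactions; here is a minimal sketch with the standard textbook payoffs:

    # Iterated prisoner's dilemma: trust (tit-for-tat) versus pure selfishness.
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def tit_for_tat(their_moves):      # cooperate first, then mirror the opponent
        return their_moves[-1] if their_moves else "C"

    def always_defect(their_moves):
        return "D"

    def play(p1, p2, rounds=100):
        h1, h2, s1, s2 = [], [], 0, 0
        for _ in range(rounds):
            m1, m2 = p1(h2), p2(h1)    # each sees only the other's history
            a, b = PAYOFF[(m1, m2)]
            s1, s2 = s1 + a, s2 + b
            h1.append(m1); h2.append(m2)
        return s1, s2

    print(play(tit_for_tat, tit_for_tat))    # (300, 300): mutual trust pays
    print(play(tit_for_tat, always_defect))  # (99, 104): betrayal gains little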

The question of AI's moral agency and rights, and how we should interact with them and treat them generally, is obviously going to come up in the Paradisi Chronicles – in particular, in Casindra Lost where SS Casindra's crew consists of a solitary human captain, an emergent AI, and a growing number of critters of various sorts.



My Paradisi Lost stories

My Casindra Lost stories feature an emergent AI, 'Al', and a captain who is reluctantly crewed with him on a rather long journey to another galaxy - just the two of them, and some cats... Another AI, 'Alice', emerges more gradually in the Moraturi arc.

Casindra Lost
Kindle ebook (mobi) edition ASIN: B07ZB3VCW9 — tiny.cc/AmazonCL
Kindle paperback edition ISBN-13: 978-1696380911 justified Iowan OS
Kindle enlarged print edn ISBN-13: 978-1708810108 justified Times NR 16
Kindle large print edition ISBN-13: 978-1708299453 ragged Trebuchet 18

Moraturi Lost
Kindle ebook (mobi) edition ASIN: B0834Z8PP8 – tiny.cc/AmazonML
Kindle paperback edition ISBN-13: 978-1679850080 justified Iowan OS 

Moraturi Ring
Kindle ebook (mobi) edition ASIN: B087PJY7G3 – tiny.cc/AmazonMR
Kindle paperback edition ISBN-13: 979-8640426106 justified Iowan OS 

Author/Series pages and Awards
