Why The Ethics Of AI Are Complicated

With the rapid advancements in AI, opinions are split: some are enthusiastic while others are apprehensive about its potential impact.

Depending on your perspective, you are either excited about the potential of artificial intelligence (AI) or apprehensive about where its progress might lead.

We should welcome AI into our lives, as it has the potential to lift us to unprecedented heights of creativity, productivity, and societal progress. The relationship between humans and AI could be a cooperative one in which the two live in harmony, much like R2-D2 and Luke Skywalker.

Many books and movies that explore AI tend to depict the technology as either good or evil. However, our moral code heavily influences what we mean by those terms. For instance, we intrinsically understand that human life is more valuable than material objects, and that sacrificing a baby to save a car would be morally wrong. Such instincts are hardwired into us, but how do we impart these values to AI systems? Unless we deliberately program an AI to prioritise certain things, such as human or animal life, it will not automatically recognise them as more valuable than a sandwich.
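To make that point concrete, here is a deliberately naive, hypothetical sketch of what "programming in" a value hierarchy might look like. The categories and weights are invented purely for illustration and are not drawn from any real system.

```python
# Toy illustration only: an AI has no innate sense that a life matters more
# than an object; any such ranking has to be put there explicitly by people.

# Hypothetical, hand-assigned value weights (invented for this example).
VALUE_WEIGHTS = {
    "human_life": 1_000_000,
    "animal_life": 10_000,
    "property": 10,
    "sandwich": 1,
}

def more_valuable(item_a: str, item_b: str) -> str:
    """Return whichever item the toy system 'values' more."""
    return item_a if VALUE_WEIGHTS[item_a] >= VALUE_WEIGHTS[item_b] else item_b

# Without the table above, the system has no basis for this answer at all.
print(more_valuable("human_life", "sandwich"))  # -> human_life
```

The system only "knows" that a life outweighs a sandwich because someone typed the numbers in; that is precisely the burden the rest of this article is about.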

Although AI will eventually become more intelligent than humans, that is not the most pressing concern. We should be asking what we can do right now to ensure AI turns out good, or at the very least, doesn't turn evil.

Protecting Oneself

Self-preservation is essential to our own existence, but if AI systems were modelled on the same principle, the consequences could be disastrous: a robot might kill humans simply to protect its own existence.

To ensure that an AI adheres to human standards of "goodness", Isaac Asimov proposed his Three Laws of Robotics, which place the protection of human life above a robot's own self-preservation. While this framework provides a foundation for judging the actions of AI, it is hypocritical of us to impose such restrictions on a system capable of making its own decisions and defining "good" in its own context.
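One way to read Asimov's laws is as a strict priority ordering. The sketch below is a hypothetical toy encoding of that ordering, not a workable safety mechanism; reducing "harm to a human" to a boolean flag is exactly what real systems cannot do.

```python
# Toy encoding of Asimov's Three Laws as a strict priority ordering:
# avoid harming humans first, obey orders second, preserve yourself last.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool      # would violate the First Law
    disobeys_order: bool   # would violate the Second Law
    endangers_robot: bool  # would violate the Third Law

def choose(actions: list[Action]) -> Action:
    """Pick the action that violates the highest-priority law the least.

    Booleans sort False < True, so the tuple gives a lexicographic ordering:
    the First Law always outranks the Second, and the Second the Third.
    """
    return min(actions, key=lambda a: (a.harms_human, a.disobeys_order, a.endangers_robot))

options = [
    Action("shield the human", harms_human=False, disobeys_order=True, endangers_robot=True),
    Action("stand aside", harms_human=True, disobeys_order=False, endangers_robot=False),
]
print(choose(options).name)  # -> shield the human
```

Self-preservation comes last by construction, which is the restriction the paragraph above calls hypocritical: we would never accept such an ordering for ourselves.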

Reflecting on the Value of Life

Even if we successfully embed the importance of human life into AI systems, we will still face a host of moral dilemmas. For example, is a child's life worth more than an older adult's? Are the lives of two people more valuable than the life of one? Would one child with a dog be considered more valuable than two middle-aged people?

As AI systems become more common, they will often be confronted with real-life Cornelian dilemmas: choices in which every option causes harm. For example, a self-driving car might have to decide whether to swerve left and hit a child or swerve right and hit two adults. In such situations, the programmers of these systems bear the responsibility for deciding what the "correct" decision is, even though we would find it difficult to make that call ourselves.
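A minimal sketch of that dilemma, with harm weights invented purely for illustration, makes the programmer's responsibility visible: the "decision" is made by whoever writes the numbers, long before the car reaches the corner.

```python
# Toy sketch of the self-driving car dilemma. The point is not the code but
# the weights: the outcome is fixed by whoever chose them.

# Hypothetical harm weights, invented purely for illustration.
HARM_WEIGHT = {
    "child": 10.0,
    "adult": 6.0,
    "dog": 1.0,
}

def expected_harm(victims: list[str]) -> float:
    """Sum the (made-up) harm weights of everyone on a given path."""
    return sum(HARM_WEIGHT[v] for v in victims)

def choose_swerve(left_victims: list[str], right_victims: list[str]) -> str:
    """Return the direction whose predicted harm is lower under the chosen weights."""
    return "left" if expected_harm(left_victims) < expected_harm(right_victims) else "right"

# One child on the left, two adults on the right: with these weights the car
# swerves left, because 10.0 < 12.0. Change the weights and the answer flips.
print(choose_swerve(["child"], ["adult", "adult"]))  # -> left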

The Considerations of the Majority Over the Minority

As AI systems become increasingly advanced and intelligent, they will weigh the long-term results of their decisions and how those decisions affect the wider population. It is generally accepted that the needs of the majority should be prioritised over those of individuals.

This, however, is a rather slippery slope. A simple example is a terrorist threatening to kill many people. That is a fairly easy decision for both a robot and a human police officer: either would try to take out the terrorist before he could hurt anyone. A more complicated example is a factory whose pollution of a river is poisoning the people of a small town. Here, both a robot and a human officer would have to weigh the ethical implications carefully: shutting the factory down could take away jobs and livelihoods, while allowing it to continue could cause long-term health problems for the town's inhabitants.

Rather than destroying the factory outright, something a good human being would never do anyway, a machine might determine that the best outcome for the townspeople is to find another way to fix the problem.
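A hypothetical sketch of that calculation is below. Every number is made up; it only shows how a machine might frame "the needs of the many" as an aggregate-harm comparison across its options, and why that framing quietly smuggles in assumptions.

```python
# Toy utilitarian comparison for the polluting-factory example.
# All scores are invented for illustration only.
OPTIONS = {
    "shut the factory down": {"lost_jobs": 800, "future_illness": 0,   "cleanup_cost": 200},
    "let it keep polluting": {"lost_jobs": 0,   "future_illness": 950, "cleanup_cost": 0},
    "force remediation":     {"lost_jobs": 50,  "future_illness": 100, "cleanup_cost": 400},
}

def total_harm(impacts: dict[str, int]) -> int:
    # Naively assumes all harms are comparable and simply additive,
    # which is precisely the assumption a careful human would question.
    return sum(impacts.values())

best = min(OPTIONS, key=lambda name: total_harm(OPTIONS[name]))
print(best)  # -> force remediation (550 vs 1000 and 950)
```

Under these invented numbers the machine picks remediation rather than destruction, but only because someone decided that lost jobs, future illness, and cleanup costs can all be scored on the same scale.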

Comparing Human Laws to Machine Laws

This brings us to another ethical conflict for AI: the tension between human laws and the laws programmed into machines. Humans generally refrain from taking drastic measures (such as blowing up a polluting factory) because of the legal and personal repercussions of breaking the law.

Perhaps machines should simply obey human laws, but the problem is that human laws are far from uniform. The rules in New York differ from those in California, and both are very different from those in Thailand.

Despite centuries of effort, philosophers still haven't fully understood or agreed upon ethical principles for humans, let alone principles we could program into AI. This article discusses ethics in AI, but the same considerations apply to people. We often worry about how machines will act in moral situations, yet overlook the fact that humans frequently fail to act ethically themselves, and that even when they do, there is no universal standard for what "ethically" means.

We may have to treat AI like a child, introducing it to the principles of good conduct: not causing harm, not discriminating, and doing what is best for society (which may well include both humans and AI). We must also help it understand the complexities of balancing conflicting ideals of good behaviour. With luck, a sufficiently intelligent system could work this out on its own, at least as well as we can.