Can AI Have Morals?

In today’s world, artificial intelligence continues to expand into real-world applications. As the use of this technology grows, however, it raises questions and concerns about the ethical situations that artificial intelligence may start to address. In the last couple of weeks, the idea of having technology analyze these ethical situations has been suggested as a possibility: artificial intelligence might be able to identify unethical behavior by extracting evidence of immoral actions in a business setting. Although research does not suggest that the technology could do this on its own at this time, it does show how artificial intelligence could use a set of morals to determine whether something is ethical.

Going beyond just a business setting, is it possible for artificial intelligence to have a set of morals it can use to make ethical decisions? To answer this, it helps to define what a moral is. A moral is a principle that distinguishes right from wrong based on previous experience, knowledge, and a set of beliefs. Each of these could, in principle, be programmed: previous experience could be supplied through machine learning, knowledge could be provided as data, and a set of beliefs could be represented as true/false values or ranges of numbers. This is not an exact blueprint for building an artificial intelligence that has morals, but it is at least an idea of how it could work. The better question is whether the technology can distinguish right from wrong, and whether it can also be discerning in the “gray areas” where there may not be an exact answer.
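
The post stays at the level of an idea, but to make it slightly more concrete, here is a minimal, purely illustrative sketch in Python of how those three ingredients might be represented as data a program could weigh. Every name, weight, and threshold below is a hypothetical stand-in, not a proposal for a real ethics model:

```python
from dataclasses import dataclass, field


@dataclass
class MoralCompass:
    """Toy container for the three ingredients described above."""

    # "Beliefs" expressed as weighted principles on a numeric range (-1.0 to 1.0).
    # The weights stand in for what machine learning might derive from "experience".
    beliefs: dict = field(default_factory=lambda: {
        "causes_harm": -1.0,      # harmful outcomes count strongly against an action
        "is_honest": 0.8,         # honesty counts in favor
        "respects_consent": 0.9,  # acting with consent counts in favor
    })

    def judge(self, action_features: dict) -> str:
        """Combine the "knowledge" about a case (its features) with the belief weights."""
        score = sum(weight for name, weight in self.beliefs.items()
                    if action_features.get(name, False))
        if score > 0.5:
            return "ethical"
        if score < -0.5:
            return "unethical"
        return "gray area"  # no clear answer, exactly the hard case the post asks about


compass = MoralCompass()
print(compass.judge({"causes_harm": False, "is_honest": True, "respects_consent": True}))
# -> "ethical"
print(compass.judge({"causes_harm": True, "is_honest": True}))
# -> "gray area" (the harm weight roughly cancels the honesty weight)
```

Even this toy version shows where the difficulty lies: the verdict is only as good as the weights, which is exactly where the questions about bias and gray areas come in.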

When it comes to creating this type of technology, there are possible ethical risks that must be taken into account. Two of these risks are bias and understanding how artificial intelligence makes its decisions. Bias can come from the data sets the technology uses to determine its actions: the data itself could be biased, or the algorithm given to the artificial intelligence could be. Using information for purposes other than those it was intended for can also cause ethical issues. It is equally important that people can understand why an artificial intelligence made a decision so that they can agree or disagree with the choice. Even when we make decisions, we have to explain ourselves so that others can see why we did what we did. The intention behind a choice is also important to understand, because sometimes the motive behind a decision does not make sense. With all of these possible conflicts, will artificial intelligence really be able to get past these obstacles?
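
On the explainability concern specifically, one way to imagine "understanding why the AI decided" is for the system to return, alongside its verdict, the principles that actually contributed to it. This is again a hypothetical sketch with invented names and weights, not an established technique from any particular library:

```python
def judge_with_explanation(action_features: dict, belief_weights: dict) -> tuple:
    """Return a verdict plus the contributing principles, so a human can inspect the reasoning."""
    contributions = {name: weight
                     for name, weight in belief_weights.items()
                     if action_features.get(name, False)}
    score = sum(contributions.values())
    verdict = "ethical" if score > 0.5 else "unethical" if score < -0.5 else "gray area"
    return verdict, contributions


# Illustrative weights only; in practice these are exactly where bias could creep in.
weights = {"causes_harm": -1.0, "is_honest": 0.8, "respects_consent": 0.9}
verdict, why = judge_with_explanation({"causes_harm": True, "is_honest": True}, weights)
print(verdict, why)  # gray area {'causes_harm': -1.0, 'is_honest': 0.8}
```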

The first thing that must be done in order to avoid these risks is to make sure that artificial intelligence receives numerous data sets so that it can identify bias. Another important feature is that the rules that govern its decisions need to be simple and logical. The issue with these expectations is that we are assuming artificial intelligence can learn and understand through the same methods we use to build our own moral compasses. So in order to create a moral compass that artificial intelligence can use, we need to provide the experiences, knowledge, and beliefs that the technology can use to build its own morals. And as artificial intelligence builds its system of morals, we, as humans, need to provide feedback so it can adjust itself along the way, if we want its decisions to be more human-like.
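
As a final hedged sketch, the feedback step described above could look something like the toy update rule below: when a human reviewer disagrees with the system's verdict, the weights on the features present in that case are nudged toward the reviewer's judgment. The functions, weights, and learning rate are all invented for illustration and are nothing like a real training procedure:

```python
def score(action_features: dict, belief_weights: dict) -> float:
    """Sum the weights of the principles present in this case."""
    return sum(weight for name, weight in belief_weights.items()
               if action_features.get(name, False))


def apply_feedback(action_features: dict, belief_weights: dict,
                   human_says_ethical: bool, learning_rate: float = 0.1) -> None:
    """Nudge the belief weights toward the human verdict when the machine disagrees."""
    machine_says_ethical = score(action_features, belief_weights) > 0.0
    if machine_says_ethical == human_says_ethical:
        return  # agreement: leave the compass as it is
    direction = 1.0 if human_says_ethical else -1.0
    for name, present in action_features.items():
        if present and name in belief_weights:
            belief_weights[name] += direction * learning_rate


weights = {"causes_harm": -1.0, "is_honest": 0.8, "respects_consent": 0.9}
case = {"causes_harm": True, "is_honest": True}      # machine score: -0.2, leaning unethical
apply_feedback(case, weights, human_says_ethical=True)
print(weights)  # "causes_harm" and "is_honest" each move up by 0.1
```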

Comments

  1. I'm curious about how this would work because in more complex moral situations people may have different opinions about what is ethical. The cartoon seems to be in reference to the trolley car problem, a famous ethical question: a trolley is on a track where it is going to hit five people. There is a lever that will switch the trolley to a different track, but on that track is one different person who would get hit. Do you pull the lever to save the five people, thus sacrificing the person who would not have been in danger otherwise? Most people say that they would, but that is just a theoretical scenario. If it were a real situation, people would have a much more difficult time deciding what to do. If a self-driving car had to choose between hitting two people who were jaywalking or swerving to hit one person on the sidewalk, is it better to prioritize two lives over one, or to protect the safety of the person who is not actively putting him- or herself in danger? Morality can be complicated, so it seems like it might be hard to come up with rules that are simple and logical when a lot of humans may have trouble agreeing on those rules.

    Replies
    1. That is a valid point. There is also the emotional aspect, which comes into play: one person who is close to you versus five or even more people whom you have never met and will probably never see again - for most, the choice is simple. Morality may be complicated, but sometimes its simplicity is what makes it so difficult to replicate in A.I.

  2. Morality is a complex and vague thing; even humans cannot define or understand it perfectly, so it could be hard to write morality into code. However, using big data sets and providing feedback to a learning AI is necessary. Even though we might not be able to perfectly teach AI what morality is, we can give it laws and make sure it makes the right choices. The process of trying to moralize AI can also help us clarify what morality is and add to the laws that constrain humans.

  3. This is a good question, because replicating morality is an important step for AI. Morality and consciousness are tightly linked, and I can't imagine one without the other. Because of this, I recognize that morality will be a hard quality to replicate, particularly when it comes to capturing its nuance. But I wonder if that's a human quality we might want to completely replicate. One could argue ... not sure I would ... that some decisions could benefit from a clearer-cut rule set.

    Replies
    1. This is an important aspect of the debate. If we do get to a point where robots become a staple of the workforce, who defines the morals set in place for these robots? Is it going to be a task force defining morals? The individual developer of the robot? The companies who utilize them?

    2. That does make you scratch your head - people's morals are not dictated by other people. Rather, they are influenced by them, as well as by upbringing, experience, and a number of other internal and external factors. Even though A.I. is capable of learning, it does not have the emotional capacity of a human to decide for itself what its morality is, or to decide that a morality should not be forced upon it by others. The people who define the morals are essentially doing something that isn't right, but it is also something that the A.I. needs. I think this demonstrates the irony and contradiction that goes on in the field of A.I. and ethics.

  4. Morality varies across different cultures. There is no standard set of morals, so we cannot simply translate morality into code. I think the only way for AI to have morals is to let it learn them through machine learning. Although that is not perfect, I think it would still give AI some of the most basic moral code.

