Human error can be fatal. What about AI error? Can an AI's decisions slip between the cup and the lip as well? What factors go into making a sound and logical decision for an AI system?
Artificial Intelligence systems are expected to add an estimated $13 trillion to global economic output by the end of the next decade, which works out to roughly 1.2 percent of additional global GDP growth every year.
With more choices, the human mind is prone to more confusion; cognitive overload is not something it copes with well. In one marketing study, overall click-through rates improved drastically when the number of options shown to consumers was limited.
Unlike humans, Artificial Intelligence systems do not think emotionally, although research into emotionally aware AI is under way. For AI, having more choices only helps its decision-making process. That help, however, is not necessarily positive. In 2016, Microsoft was forced to take down its AI chatbot, Tay.ai, in less than a day, when it turned into a racist, misogynistic tweeting machine. Tay.ai was designed to learn extensively from its users, and the users taught it to be exactly what it was never meant to be.
While a human decision emerges from the constant struggle among logic, experience, and intuition, an AI decision is not so easy to figure out. AI systems can be opaque: even the makers and developers of an AI system are often unable to tell why it chose a certain X over a Y. Big companies do not want to take the risk of deploying an AI system that simply churns out a result; they want to understand the factors it weighed before arriving at a conclusion.
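The gap between an opaque result and an explained one can be illustrated with a toy model. The Python sketch below is purely hypothetical: the feature names, weights, and loan-approval scenario are invented for illustration and do not describe any real company's system. It shows an interpretable linear scorer that can report exactly which factors drove its decision, and by how much:

```python
# A minimal sketch of an "explainable" decision: a hand-rolled linear
# scorer whose per-feature contributions can be inspected. All feature
# names and weights here are invented for illustration.

def explain_decision(features, weights, threshold=0.0):
    """Return (decision, contributions), where each contribution is
    the feature's weight multiplied by its value."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    decision = "approve" if score >= threshold else "reject"
    return decision, contributions

# Hypothetical loan-approval example.
features = {"income": 0.8, "debt": 0.5, "credit_history": 0.9}
weights = {"income": 2.0, "debt": -3.0, "credit_history": 1.5}

decision, contributions = explain_decision(features, weights)
print(decision)  # the model's choice...
for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {c:+.2f}")  # ...and the factors behind it
```

A deep neural network offers no such itemized breakdown: its "weights" are millions of entangled parameters, which is precisely why businesses ask for the kind of factor-level accounting this toy model provides for free.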
IBM, one of the global leaders in building AI solutions, highlights that the quintessential objective of Artificial Intelligence is not to outdo humans in making decisions, but rather to assess and predict return on investment, foreseeable opportunities in the market, statistics on diseases cured, and so on.
How safe or ethical Artificial Intelligence systems can be has been debated since the field's genesis. If the AI in a driverless car has to make a split-second decision to avoid a major accident, would it save a human life by putting its own system on the line? Only time will tell.