Risks of Artificial Intelligence

Nothing good comes without repercussions, and artificial intelligence is no exception. Despite the many benefits AI is set to deliver, we must brace ourselves for some risks. This chapter takes you through three major risks associated with artificial intelligence: cognitive biases, ethics and privacy issues, and the digital divide.

Cognitive Biases

A cognitive bias is a systematic error in cognitive processes such as remembering, evaluating, and reasoning. In humans, it occurs as a result of holding onto one's beliefs and preferences despite contrary information. In machines, it happens when a computer system reflects the skewed data or assumptions it was built on. This is one of the biggest risks in AI because artificial intelligence is not only learning our biases but also amplifying them (Jarno, 2017).

Whenever AI is subject to cognitive bias, the result is a series of inaccuracies throughout whatever purpose the AI is meant to serve. The silver lining is that the bias tends to run in a consistent direction, so it can be corrected once spotted.

A common misconception in the field of artificial intelligence is that a bigger dataset is always better. That is untrue, especially when the dataset is biased. Although adding data normally leads to an increase in accuracy, that gain disappears if the added data is biased. If you feed the AI garbage, its output will be garbage.

If the AI is learning from data that has been polluted with human bias, the machine will not merely learn that bias; it will amplify it. This is a major problem, especially if it is assumed that the AI is impartial and cannot hold the same biases humans do.

Consider a situation where a marketing strategist wants to run campaigns spearheaded by university students. The strategist announces the opportunity and applicants send in their details. To select the most appropriate candidates, AI is used to cross-check their qualifications. Ideally, the AI should compare all candidates and return those with the strongest qualifications. However, that would not be the case if the program is supplied with biased data that favors candidates from a particular university, say the University of Toronto. The AI will, without doubt, recommend mostly students from the University of Toronto, because that is what it was fed. That, in essence, is the principle of "garbage in, garbage out."
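The screening scenario above can be sketched in a few lines. This is a hypothetical toy model, not a real hiring system: the "AI" simply learns each university's past selection rate from biased historical decisions, then scores new candidates by that rate alone.

```python
# A minimal sketch of "garbage in, garbage out" in candidate screening.
# All data and names are hypothetical; the historical record is
# deliberately skewed toward one university.

from collections import defaultdict

# Past hiring decisions: (university, was_selected)
history = [
    ("University of Toronto", True), ("University of Toronto", True),
    ("University of Toronto", True), ("McGill University", False),
    ("McGill University", False), ("University of Ottawa", False),
]

selected = defaultdict(int)
total = defaultdict(int)
for university, was_selected in history:
    total[university] += 1
    selected[university] += was_selected  # True counts as 1

def score(university: str) -> float:
    """Predicted selection probability, learned from the biased history."""
    return selected[university] / total[university] if total[university] else 0.0

# Equally qualified candidates receive very different scores:
print(score("University of Toronto"))  # 1.0
print(score("McGill University"))      # 0.0
```

The model never sees a candidate's actual qualifications; it faithfully reproduces, and hard-codes, the bias in its training data.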

Major tech giants have joined the field of AI, making it feel like an arms race. This puts pressure on researchers to publish first and on companies to release products before their competitors do. As a result, minimal time is dedicated to analyzing cognitive bias. That, combined with pre-existing dataset inaccuracies, means the bias finds its way into production (Jarno, 2017).

But the truth of the matter is that machines do not hold bias of their own. Artificial intelligence does not have the power to make something true or false. The bias in machine learning can be traced back to the design of the algorithm and the interpretation of the data. Avoiding the risk therefore lies in optimizing the AI algorithm and constructing unbiased datasets. Imagine the consequences if you created a self-driving car that could not "see" people of races other than white. It is vital that the training set be diverse enough that the AI does not simply go by the few things it knows. Facial recognition technology is a good example here. If the training set for a facial recognition system excludes people of color, the system will struggle to recognize faces outside the established norm.

Training sets do not materialize out of nowhere. They are created, and that means there is a chance to create full-spectrum training sets that capture the diversity of humanity.
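One practical step when constructing such a training set is to audit its composition before training. The sketch below uses hypothetical group labels and a placeholder threshold; a real audit would use the demographic attributes relevant to the application.

```python
# A simple training-set diversity audit (hypothetical labels).
# Flags any group whose share of the dataset falls below a threshold.

from collections import Counter

def audit_balance(labels, min_share=0.1):
    """Return the share of each group that makes up less than
    min_share of the dataset, so it can be augmented before training."""
    counts = Counter(labels)
    n = len(labels)
    return {group: count / n for group, count in counts.items()
            if count / n < min_share}

dataset = ["group_a"] * 90 + ["group_b"] * 8 + ["group_c"] * 2
print(audit_balance(dataset))  # {'group_b': 0.08, 'group_c': 0.02}
```

Underrepresented groups flagged by such a check can then be deliberately collected for, rather than discovered as failures after deployment.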

Ethics & Privacy Issues

There is no way we can deny that intelligent systems have transformed our lives, providing translations, conducting research, composing art, and detecting fraud, among other things. But as these systems increase the efficiency of our world, they raise ethical and privacy issues (Beall, 2018).

Issues of inequality arise. AI machines create wealth, but how should this wealth be distributed? The economic system we operate in runs on compensation for contribution. Most firms operate an hourly work model, but with the introduction of AI, human labor is drastically reduced, so revenue goes to fewer people. In the long run, those who hold ownership in AI-driven firms make all the money. There is already a widening wage gap in which the bigger portion of the money earned goes to a handful of people. If we are headed toward a post-work society, how should the post-labor economy be structured?

The issue of racist robots is slowly becoming a concern. Is this an acceptable bias in AI? Definitely not, although it exists. As we have already seen in section 4.1, bias arises from biased data being fed to the machine and from unfavorable manipulations of the algorithm. Google has been at the forefront of championing AI, but things can go wrong, as with its Photos service's mislabeling incidents, or when software meant to predict future criminals appeared biased against people of color. Because AI is a product of people who can be judgmental and biased, those ethical failings can be transmitted to the AI (Beall, 2018).

Of particular concern is human security. As the technology gains power, it opens opportunities for it to be used not only for good reasons but also for nefarious ones. Robots may be developed to replace human soldiers, autonomous weapons may be built, and AI systems may be deployed maliciously. Cyber-security takes center stage here: individuals' personal data, including medical records, may be stolen and used manipulatively.

The impact of AI on our humanity has always been under scrutiny. The extent to which these machines influence our behavior and interactions is unclear. As machines improve, they start to pass for humans, as a bot called Eugene Goostman was reported to do during the 2014 Turing test competition, where it chatted with human judges, a number of whom believed they had been talking with a person. This is just the start of an era in which most of our interactions will be with machines rather than with real people. Whereas humans are limited in the kindness and attention they can extend to others, AI machines can deploy unlimited resources to build relationships. That means we are more likely to get lost in them and even neglect our family members.

Furthermore, we cannot leave out the idea of artificial stupidity. Artificial intelligence is the result of learning; even human intelligence comes from learning. Before an AI system can be deployed for a task, it goes through a training phase in which it learns the right patterns and how to respond to certain inputs. After the learning phase, it enters a test stage where new examples are introduced to check its performance. Obviously, not every case can be covered in the training and testing phases. Once the system reaches the real world, it can be fooled in ways humans cannot. For instance, it may be made to "see" non-existent things, producing the wrong output.
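This kind of failure can be shown with a deliberately naive toy: a keyword-based spam filter (a hypothetical stand-in for a learned model, with a hand-picked keyword set). A trivial character substitution that no human reader would be confused by is enough to fool it, because the obfuscated words were never seen during training.

```python
# A toy illustration of "artificial stupidity": a spam filter that
# matches keywords learned from training data, then fails on inputs
# that fall outside anything it saw during training or testing.

import string

SPAM_WORDS = {"free", "winner", "prize"}  # stand-in for learned patterns

def looks_like_spam(message: str) -> bool:
    """Flag a message if it contains any known spam keyword."""
    words = {w.strip(string.punctuation) for w in message.lower().split()}
    return bool(words & SPAM_WORDS)

print(looks_like_spam("You are a winner, claim your free prize"))  # True
# A trivial obfuscation fools the model but not a human reader:
print(looks_like_spam("You are a w1nner, claim your fr3e pr1ze"))  # False
```

A human reads both messages identically; the model, which only knows the exact patterns it was trained on, does not.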

Digital Divide

There is ongoing uncertainty as to whether AI will widen or narrow the global digital divide. During the 1990s, the digital divide was defined solely by the disparity between individuals who had access to the Internet and those who did not. As the divide took shape, we faced an increase in anxieties, frictions, and inequality among people. Thus, policy makers, including the United Nations, developed strategies to mitigate the divide, such as expanding broadband access in rural settings.

Many of these policies had begun to work until new and emerging technologies such as AI once again threatened to widen the gap. As AI improves daily, it is shaping up to be a major differentiator. It will disrupt industries and economies all over the globe, and although this may be a good thing in some instances, a deeper digital divide would not be among the benefits.

Researchers estimate that 70 percent of the economic benefits accruing from AI will be confined to North America and China, while developing countries will be left with less than 6 percent (McSherry, 2018). That is to say, the digital divide emanating from AI could be far worse than the one caused by the Internet (Insidebigdata, 2018).

In addition, there would be a significant shortage of global AI talent. With most talent concentrating in the Global North, lured by higher pay, the Global South would face a shortage so severe that it could not implement AI effectively. These regions would struggle to find and keep the data scientists required to build and apply AI.

Even though the technology proves useful, strategies ought to be put in place now to mitigate the risk of a digital divide that creates "haves" and "have-nots." Emerging markets could give developing nations a better chance of implementing AI. This has already happened in countries such as Nigeria, South Africa, and Uganda, where AI has been used to improve diagnoses and treat patients located in remote areas.
