Exploring the Ethical Implications of AI: A Closer Look at the Challenges Ahead


This article discusses five common ethical issues that arise when AI is not implemented or released in the most responsible way.

AI ethics is about releasing and implementing AI responsibly, paying attention to several considerations, from data etiquette to tool development risks, as discussed in a previous article. In this article, we’ll explore some of the ethical issues that arise with AI systems, particularly machine learning systems, when we overlook the ethical considerations of AI, often unintentionally.

The 5 Common AI Ethical Issues

1. Bias Propagation

Although there’s a strong belief that algorithms are less biased than humans, AI systems are known to propagate our conscious and unconscious biases. 

For example, recruiting tools have been known to algorithmically “learn” to dismiss women candidates because the historical data they were trained on reflected a preference for men in the tech workforce.

Even facial recognition systems are infamous for disproportionately making mistakes on minority groups and people of color. For example, when researcher Joy Buolamwini looked into the accuracy of facial recognition systems from various companies, she found that the error rate for lighter-skinned males was no higher than 1%. For darker-skinned females, however, the error rate was far higher, reaching up to 35%. Even the most renowned AI systems have been unable to accurately identify female celebrities of color.

So, what’s the primary cause of AI bias?

Data. AI systems today are only as good as the data they are trained on; if the data is nonrepresentative, skewed towards a particular group, or otherwise imbalanced, the AI system will learn that skew and propagate biases.

Bias in data can be caused by a range of factors. For example, if certain groups of people have historically been discriminated against, that discrimination will be faithfully recorded in the data.

Another reason for bias in data can be a company’s data warehousing processes or lack thereof, causing AI systems to learn from skewed samples of data instead of representative ones. Even using a snapshot of the Web to train models can mean you’ve learned the biases in that snapshot. This is why large language models (LLMs) are not free from biases when they’re quizzed on subjective topics.

Bias in data can also stem from a development mistake, where the data used for model development was not sampled correctly, resulting in an imbalance of subgroup samples.
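To make this concrete, here is a minimal sketch of a disaggregated evaluation, which reports error rates per subgroup rather than a single overall number. The file name and column names are hypothetical placeholders, not a reference to any specific tool:

```python
# Minimal sketch of a disaggregated evaluation. The file name and the
# "subgroup"/"correct" columns are hypothetical placeholders.
import pandas as pd

# Assume one row per prediction, recording the demographic subgroup of
# the sample and whether the model's prediction was correct.
results = pd.read_csv("eval_results.csv")

# Share of evaluation samples per subgroup: a heavily skewed
# distribution is an early warning sign of nonrepresentative data.
print(results["subgroup"].value_counts(normalize=True))

# Error rate per subgroup: large gaps (like the 1% vs. 35% gap in the
# facial recognition study above) indicate the model performs much
# worse on underrepresented groups.
error_rate = 1 - results.groupby("subgroup")["correct"].mean()
print(error_rate.sort_values(ascending=False))
```

A single aggregate accuracy number can hide exactly the subgroup gaps that this kind of breakdown surfaces.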

Bottom line: When there’s limited oversight of the quality of data used for model training, unintended biases are bound to creep in, and we may not know when or where they will surface, especially with unconstrained multi-taskers like LLMs.

2. Unintended Plagiarism

Generative AI tools such as GPT-3 and ChatGPT learn from massive amounts of Web data, modeling the probability of word sequences so they can produce fluent, meaningful content. In doing so, these generative AI tools may repeat content from the Web word-for-word without any attribution.

How would we know that the generated content is, in fact, unique? What if the uniquely generated text is identical to a source on the Web? Can the source claim plagiarism?
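One partial, illustrative answer is to scan generated text for long verbatim overlaps with known sources. The sketch below is a naive n-gram check, assuming you have a corpus of candidate sources to compare against; it is not how any particular generative AI vendor detects duplication:

```python
# Naive verbatim-overlap check: flag any 8-word span in the generated
# text that also appears word-for-word in a source document. The
# window size and the source corpus are illustrative assumptions.
def ngrams(text: str, n: int = 8) -> set[str]:
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlaps(generated: str, sources: list[str], n: int = 8) -> set[str]:
    generated_spans = ngrams(generated, n)
    hits = set()
    for source in sources:
        hits |= generated_spans & ngrams(source, n)
    return hits  # an empty set means no long verbatim matches were found

generated_text = "..."          # output from a generative model
source_corpus = ["...", "..."]  # documents the model may have trained on
matches = verbatim_overlaps(generated_text, source_corpus)
print(f"{len(matches)} verbatim 8-word spans also appear in the sources")
```

Even a check like this only catches word-for-word copying; close paraphrases and blended styles are much harder to detect.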

We’re already seeing this issue in artwork generators that learn from a large number of art pieces belonging to different artists. The AI tool may end up generating art that combines work from multiple artists.

In the end, who exactly owns the copyright to the generated art? If the artwork is too similar to existing works, this can lead to copyright infringement.

Bottom line: Leveraging Web and public datasets for developing models can result in unintended plagiarism. However, with little AI regulation in place worldwide, we currently lack enforceable solutions.

3. Technology Misuse

A while ago, a deepfake video portrayed a Ukrainian state leader as saying something they did not actually say. Deepfakes are AI-generated videos or images that depict people saying or doing things they never did. Similarly, AI image generators like DALL·E and Stable Diffusion can be used to create incredibly realistic depictions of events that never occurred.

Intelligent tools like these can be used as weapons in a war (as we’ve already seen), to spread misinformation for political advantage, to manipulate public opinion, to commit fraud, and more.

In all of these cases, AI is NOT the bad actor; it’s doing what it was designed to do. The bad actors are the humans who misuse AI for their own advantage. Furthermore, the companies or teams that create and distribute these AI tools often have not taken into account the wider effects the tools may have on society, which is also an issue.

Bottom line: While the misuse of technology is not exclusive to AI, because AI tools are so adept at replicating human abilities, it is possible that the abuse of AI could go undetected and have a lasting effect on our view of the world.

4. Uneven Playing Fields

Algorithms can be tricked, and AI-powered software is no exception: people can manipulate the underlying algorithms to gain an unfair advantage.

In a LinkedIn post, I discussed how people might trick AI hiring tools once the attributes the system uses in its decision-making are disclosed.

While requiring an AI hiring system to reveal its decision-making process is a well-intentioned step toward transparency, it may also enable people to game the system. For example, candidates may learn that certain keywords are preferred in the hiring process and stuff their resumes with those keywords, unfairly ranking higher than more qualified candidates.
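As a toy illustration, consider a naive screener that scores resumes by counting preferred keywords. The keyword list and scoring rule here are hypothetical, not drawn from any real hiring tool:

```python
# Toy keyword-count screener showing how keyword stuffing games the
# ranking. The keyword list and scoring rule are hypothetical.
PREFERRED_KEYWORDS = {"python", "machine learning", "leadership"}

def score_resume(text: str) -> int:
    # Naive rule: one point per occurrence of each preferred keyword.
    lowered = text.lower()
    return sum(lowered.count(keyword) for keyword in PREFERRED_KEYWORDS)

qualified = "Led a team that shipped machine learning systems in Python."
stuffed = "python python python leadership leadership machine learning " * 3

# The stuffed resume outranks the qualified one on keyword count alone.
print(score_resume(qualified))  # 2
print(score_resume(stuffed))    # 18
```

Real systems are more sophisticated, but the principle is the same: once the scoring signal is known, it can be optimized for directly.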

We see this on a much bigger scale in the SEO industry, estimated to be worth over 60 billion dollars. Ranking highly in Google’s eyes these days is not just a function of having meaningful content worth reading, but also of having done “good SEO,” which explains the industry’s growing popularity.

SEO services have enabled organizations with hefty budgets to dominate the ranks as they’re able to invest heavily in creating massive amounts of content, performing keyword optimization, and getting links placed broadly around the Web.

While some SEO practices are mere content optimization, others “trick” the search algorithms into believing that their websites are the best in class, the most authoritative, and the most valuable to readers. This may or may not be true; the highly ranked companies may simply have invested more in SEO.

Bottom line: Gaming AI algorithms is one of the easiest ways to gain an unfair advantage in business, careers, social influence, and politics. People who figure out how your algorithm “operates” and makes decisions can abuse and game the system.

5. Widespread Misinformation

As we rely more and more on answers and content generated by generative AI systems, the “facts” these systems produce risk being taken as the ultimate truth. For example, in Google’s demo of its generative AI system, Bard, the tool provided three points in response to the question, “What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?” One of the points stated that the telescope “took the very first pictures of a planet outside of our own solar system.” However, astronomers very publicly pointed out that this wasn’t the case. Directly using output from such systems can result in widespread misinformation.

Unfortunately, without proper citation, it isn’t easy to verify facts and decide which answers to trust and which not to. And as more people accept the content generated without question, this can lead to the spread of false information on a much larger scale than seen with traditional search engines. 

The same is true for content ghostwritten by generative AI systems. Previously, human ghostwriters had to research information from trustworthy sources, piece it together in a meaningful way, and cite those sources before publishing. Now, entire articles can be ghostwritten by an AI system. Unfortunately, if an article generated by an AI system is published without further verification of the facts, misinformation is bound to spread.

Bottom line: Over-reliance on AI-generated content without human fact-checking will have a lasting impact on our worldviews, as we absorb unverified information over extended periods of time.

Summary

In this article, we explored some potential ethical issues that can arise from AI systems, particularly machine learning systems. We discussed how:

- AI systems can propagate racial, gender, age, and socioeconomic biases
- AI can infringe on copyright laws
- AI can be used in unethical ways to harm others
- AI can be tricked, unleveling the playing field for people and businesses
- Trusting answers blindly from AI systems can cause widespread misinformation

It’s critical to note that many of these problems were not intentionally created, but rather they are the side effects of how these systems were developed, disseminated, and used in practice.

Although we can’t eliminate these ethical problems entirely, we can certainly take steps in the right direction to minimize the issues created by technology in general, and in this case, AI.

With these insights into the ethical dilemmas of AI, we can focus on devising strategies for more responsible development and dissemination of AI systems. In an upcoming article, we’ll explore how businesses can lead the way in practicing AI responsibly instead of waiting for government regulation.

Keep Learning & Succeed With AI

- Join my AI Integrated newsletter, which clears the AI confusion and teaches you how to successfully integrate AI to achieve profitability and growth in your business.
- Read The Business Case for AI to learn applications, strategies, and best practices to be successful with AI (select companies using the book: government agencies, automakers like Mercedes-Benz, beverage makers, and e-commerce companies such as Flipkart).
- Work directly with me to improve AI understanding in your organization, accelerate AI strategy development, and get meaningful outcomes from every AI initiative.

