What is AI Ethics?

As AI continues to become more prevalent in our lives, it is crucial to consider the ethical implications of its use. Although AI can augment and revolutionize how we live, work, and interact with each other, it can also cause harm if not used or developed correctly.

People can be wrongly imprisoned when facial recognition systems fail in law enforcement and the judicial system. People can be killed if self-driving cars fail to recognize them as pedestrians on the road. Things can go badly wrong if we fail to think through the implications of how we use and develop these AI-powered tools.

This article is part of a series that explores what AI ethics means, its implications for society, and how businesses can start leading the way by doing AI responsibly while also reaping its benefits.

In this article, we’ll focus on what ethics means in the context of AI.

What is AI Ethics?

Ethics in the context of AI is about doing AI responsibly, keeping the questions outlined below in mind:

DATA ETIQUETTE

AI systems today are data-dependent; that’s how machine-learning-driven AI systems learn. The underlying data you use can significantly impact how the model behaves. The question is: are we using this data ethically? When thinking about ethics and data, we have to consider the following:

- Where is the data you’re looking to use coming from? Is it data from your company database, the Web, or a public data bank?
- Has the data been sourced transparently, with user privacy in mind?
- Did users opt in to having their data used for model development?
- Is the data representative of the subgroups of interest?

How you source, combine, and use data to train models will impact how your models behave downstream. We’ve seen time and again how the underlying data used by algorithms shapes model behavior.

And don’t forget: if you’re using third-party models, the same applies. How the third-party vendor sources, combines, and uses data to train their models impacts YOUR downstream applications.

Bottom line: data etiquette applies both to models you develop yourself and to third-party models you build on.
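One concrete first step toward data etiquette is auditing who is actually represented in your training data before you train anything. Below is a minimal sketch of such an audit, assuming a pandas DataFrame with hypothetical gender and age_group columns; the 10% flag threshold is an arbitrary illustration, not a standard.

```python
# A minimal sketch of a pre-training data audit. The DataFrame,
# column names, and 10% threshold are hypothetical illustrations.
import pandas as pd

def audit_representation(df: pd.DataFrame, subgroup_cols: list[str]) -> None:
    """Print the share of each subgroup so imbalances are visible
    before the data is used for model training."""
    for col in subgroup_cols:
        shares = df[col].value_counts(normalize=True).sort_index()
        print(f"\n{col} distribution:")
        for value, share in shares.items():
            flag = "  <-- check representation" if share < 0.10 else ""
            print(f"  {value}: {share:.1%}{flag}")

# Hypothetical training data; in practice this would come from your
# company database or another documented, consented source.
df = pd.DataFrame({
    "gender": ["F", "M", "M", "M", "M", "M", "F", "M", "M", "M"],
    "age_group": ["18-30"] * 7 + ["31-50"] * 2 + ["51+"],
})
audit_representation(df, ["gender", "age_group"])
```

A simple report like this won’t fix biased data, but it makes under-represented subgroups visible early, when sourcing more data is still cheap.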

EXPLAINABILITY 

AI explainability is the ability of AI systems to provide reasoning as to why they arrived at a particular decision, prediction, or suggestion. Explainability may not be critical for many AI applications, such as email spam filtering, grammar correction, and product recommendation systems.

However, in domains such as healthcare, law enforcement, and any other domain where a person’s life, livelihood, or safety is at stake, evidence and explainability are crucial for building trust in AI systems.

For example, if an AI system predicts that a patient has a high risk of lung cancer, why did it arrive at that prediction? Insight into the AI’s decision-making helps the physician decide whether the recommendation is trustworthy.

When it comes to AI explainability, you need to ask whether your system must be explainable and, if it does, whether you can gain a glimpse into its reasoning.
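To make “gaining a glimpse” concrete, here is a minimal sketch of one common model-inspection technique, permutation importance, applied to a toy classifier on synthetic data. The feature names are hypothetical stand-ins, and real clinical explainability would demand far more rigor than this.

```python
# A minimal sketch of permutation importance on synthetic data.
# Feature names are hypothetical; this is illustration, not a
# clinically valid explanation method.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for patient features.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["smoking_years", "age", "bmi", "family_history"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does shuffling each feature hurt held-out performance?
# Features whose shuffling hurts most are driving the predictions.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.3f}")
```

Even a coarse ranking like this gives a reviewer something to sanity-check: if the model leans on a feature no physician would credit, that’s a red flag worth investigating before trusting its output.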

USAGE ETIQUETTE

AI usage etiquette relates to how you integrate AI into a workflow. Is it the sole decision-maker, a human assistant, or a second opinion? How you employ AI can make a massive difference in the risks to users and society when AI gets it wrong.

For example, when you use AI to automatically sift through emails and filter out spammy ones, it’s the sole decision-maker. Letting AI be the sole decision-maker in this scenario is low-risk: if it makes a wrong decision, spam may end up in your inbox, or valid emails may get filtered out as spam. Either way, you can still mark specific emails in your inbox as spam or browse through your spam folder.

However, in an application area such as medical diagnosis and treatment planning, asking the AI system to solely decide on a cancer treatment plan for a patient is a HUGE risk. Who is to blame if a treatment plan based solely on the AI tool turns out to be ineffective? The physician? The AI vendor? The hospital that decided to employ AI in the first place?

Further, usage etiquette is also a function of model accuracy: the lower a model’s accuracy and the higher the stakes of the decision, the greater the risk of letting it act on its own.

Considering all of this, the question to ask here is: what are the risks of employing your AI tool in the way you envision, given its current performance? Ideally, you want it to have as few negative implications as possible for people’s safety (physical and cyber) and livelihood. If the risks are high, ask whether they are worth taking.
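One common way to manage this risk is to let the AI act alone only on high-confidence predictions and defer everything else to a human. Here is a minimal sketch of that idea for the spam example; the probabilities, the 0.95 threshold, and the action labels are all hypothetical.

```python
# A minimal sketch of usage etiquette in code: the AI acts alone only
# when it is confident and defers to a human otherwise. The threshold
# and action labels are hypothetical illustrations.
def route_decision(probability_spam: float, threshold: float = 0.95) -> str:
    """Act automatically on high-confidence predictions; send
    everything in the uncertain middle to a human for review."""
    if probability_spam >= threshold:
        return "auto: move to spam folder"
    if probability_spam <= 1 - threshold:
        return "auto: keep in inbox"
    return "defer: flag for human review"

for p in (0.99, 0.50, 0.02):
    print(f"P(spam)={p:.2f} -> {route_decision(p)}")
```

The same pattern scales to higher-stakes domains by raising the threshold, or by making the human review path mandatory regardless of confidence.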

DEVELOPMENT RISKS 

In some cases, the development of an AI tool can itself cause unwanted trouble, even if no harm is intended. For example, an AI tool that can guess login passwords, developed as an interesting R&D problem, can have undesirable consequences once it lands in the hands of a bad actor.

It’s one thing if law enforcement develops such a “dangerous” tool in a constrained environment to catch predators. It’s another thing altogether if the development team intends to open-source the tool, essentially distributing it to the public. The latter can have many unintended consequences, and developers should take responsibility not just for how they use the tool but also for how they share it.

The question to ask when it comes to development risks is: have you considered the risks of developing your AI tool and distributing it through your intended channels?

Last Word

As AI becomes increasingly integrated into our lives, we must consider its ethical ramifications. In this article, we specifically explored what AI ethics means in creating and using AI-powered tools and the questions to consider for each ethical element.    

In summary, there are four broad considerations when it comes to AI ethics, and they are:

- Data Etiquette: Is the quality of data used to train models known?
- Explainability: Can you intuitively explain the AI’s decisions?
- Usage Etiquette: Have you considered how AI will fit into your workflow and what the risks are in that scenario?
- Development Risks: Have you considered the broader consequences of developing and distributing your AI tool?

Each of these considerations focuses on a different angle of how AI can potentially cause harm. In a future article, we’ll explore some of the common ethical challenges of AI systems.


Keep Learning & Succeed With AI

- Join my AI Integrated newsletter, which clears the AI confusion and teaches you how to successfully integrate AI to achieve profitability and growth in your business.
- Read The Business Case for AI to learn applications, strategies, and best practices to be successful with AI (select companies using the book: government agencies, automakers like Mercedes Benz, beverage makers, and e-commerce companies such as Flipkart).
- Work directly with me to improve AI understanding in your organization, accelerate AI strategy development, and get meaningful outcomes from every AI initiative.
