In about a month, I will have the honor of presenting my research on “The Ethics Of AI: How To Avoid Harmful Bias And Discrimination” to attendees at CX Europe 2018. The event will take place in London on November 13 and 14 and will feature several of my esteemed analyst colleagues, as well as speakers from Bang & Olufsen, Credit Suisse, Kindred Group, MoneySuperMarket, and many other companies.

My presentation will describe the different ways harmful bias can infect the machine-learning models that act as the brain of most AI systems. More importantly, I’ll prescribe ways companies can avoid these types of bias, both from a technical perspective and a broader organizational one. Make no mistake: There is no easy fix. Machine-learning models are discriminators by design; they identify useful patterns and signals in data to segment a population. But in a world where GDPR looms large and values-based consumers shift loyalty based on a brand’s ethical scorecard, firms need to make sure these models aren’t discriminating against customers based on gender, race, ethnicity, age, sexual orientation, religion, or in similarly harmful ways. This is where I believe CX professionals need to play a key role: As an advocate for the customer, you should make sure that AI isn’t having a disparate, harmful impact on anyone.
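One concrete way to start is to test a model’s decisions for disparate impact across protected groups. Here is a minimal sketch of such a check in Python, using the disparate impact ratio and the “four-fifths” rule of thumb borrowed from US employment law; the toy data, column names, and 0.8 threshold are illustrative assumptions, not a definitive audit procedure.

```python
import pandas as pd

# Hypothetical decision data: "group" is a protected attribute,
# "approved" is the model's binary decision (1 = favorable outcome).
df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Favorable-outcome rate for each group.
rates = df.groupby("group")["approved"].mean()

# Disparate impact ratio: worst-off group's rate vs. best-off group's rate.
# The "four-fifths rule" flags ratios below 0.8 as potentially discriminatory.
ratio = rates.min() / rates.max()

print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: potential disparate impact; investigate this model.")
```

In practice, a single ratio like this is only a starting point; a serious audit would weigh multiple fairness metrics and scrutinize the data pipeline that feeds the model, which is exactly why the fix is as much organizational as it is technical.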

We are at a pivotal moment as a species. We can either use AI for good or allow it to cement and reinforce past injustice. If we are lazy, it will do just that. But if we are thoughtful and vigilant, AI can have a positive impact on all people — at least, that is my hope.

So please join me, my colleagues, and other CX professionals next month in London. Together, we’ll take a step toward creating a more just future. I look forward to seeing you there!

Register here