
Gaining and Keeping Trust in the Age of AI: A Five-Point TRUST Framework for Deploying AI


Generative AI technology is progressing at breakneck speed and is predicted to change numerous aspects of our daily lives. With the wide availability of large language model (LLM)-powered chatbots, it has never been easier for any person or entity to create new content and media.

With human-like abilities to write essays, poetry and prose, these advanced AI tools are enabling organizations of all stripes to generate and distribute content at unprecedented speed and near-zero cost. End users, bombarded across every media and communication channel, are overwhelmed and starting to tune out.

As brands navigate this strange new world where the costs of content generation and distribution approach zero, they face a fundamental dilemma at the heart of it all: How do they take advantage of these game-changing technologies while gaining and keeping the trust of their customers?

Unlike other operational metrics, trust is intangible: it is gained over time but lost in an instant. Trust in a brand or organization is the foundation on which all operations rest, and it may be the most human of traits, one that cannot be fully automated away. Counterintuitive as it may seem, cheaper and more plentiful content does not engender trust, and arguably erodes it. In my experience, the brands that have done the best at building and maintaining trust in this new age have embraced the following TRUST framework when deploying AI:


  • Transparency on data and systems: The first tenet of building trust is to be extremely transparent about how brands capture and use customer data. Giving customers full visibility into the data lifecycle, from collection and processing through its use by AI systems or tools, goes a long way in establishing a shared sense of intent and purpose. In addition, brands should give customers control over opting in to or out of specific tools and communications, while complying with regulatory requirements such as GDPR and CCPA.
  • Reliable and Safe AI: Not all AI tools are the same, so brands should prioritize AI tools that reveal and explain their data sources, practice AI safety, publish verifiable benchmarks and explain what pieces of customer data, if any, were used to arrive at specific outputs.
  • Understanding with human touch: While AI-powered automation will be a key ingredient of operational excellence, it should be combined with human oversight to prevent inadvertent harm or bias. Without that oversight, AI systems can drift away from their stated objectives and cause irreparable harm at unprecedented speed and scale. And when damage does happen, nothing replaces a human who can understand the fullness of the customer's situation and redress the harm.
  • Shared values: Brands should vet their AI tools and systems not only for operational metrics but for alignment with their values and purpose, and communicate that alignment back to customers in a transparent manner. Modern media tools make it easy for brands to share those values and what they are doing to practice them daily.
  • Think long term from the beginning: Trust is not gained in a moment or a day, but it can be lost in an instant. Brands should carefully design processes, communications and experiences, and thoughtfully include AI systems that prioritize long-term relationships with customers. Brands should operate AI systems with objectives that optimize long-term metrics by design; systems that only track and optimize short-term conversions or revenue are unlikely to optimize for long-term retention and lifetime relationships.

In summary, generative AI tools can create strategic opportunities for brands that use them in transparent and sustainable ways, helping them gain and keep trust, deepen relationships with customers and drive long-term growth. As the dynamic landscape of AI continues to evolve, the brands that embrace these TRUST principles will set themselves apart as reliable, customer-focused enterprises, strengthening the connection between technology and human values.

Brands that neglect this framework when using AI may struggle to build and maintain strong relationships with their customers, facing issues such as reduced customer confidence, privacy concerns and even legal trouble if customer data is mishandled.

Brands that follow the framework, by contrast, are orchestrating the integration of AI, aligning its capabilities with the values and needs of people. Through this framework, brands can foster a synergy between AI automation and human values, culminating in a sustainable partnership that stands the test of time.


Manyam Mallela is the Co-founder and Chief AI Officer at Blueshift and was previously employee #1 at Kosmix, which was acquired by Walmart to become Walmart Labs. He was also Senior Director of Engineering at Walmart Labs and is a graduate of UT Austin and IIT Bombay.
