Organizations looking to scale their use of AI-enhanced decision-making face a dilemma: There is still a large gap between voluntary frameworks for responsible AI and actionable law and enforceable regulations. I wrote about Singapore’s Model AI Governance Framework in an earlier blog post. My colleague Guannan Lu recently took a look at China’s new data and AI regulation rules, and the European Parliament is currently debating its first law on AI regulation. There is also a wealth of well-established, codified rights and regulations on data privacy, which Forrester has captured in a global map of privacy rights and regulations. But there are currently no commonly accepted, trusted approaches to integrating responsible AI into an organization’s business operations. Organizations looking to implement responsible AI guidelines and technology risk management are therefore left to their own devices when attempting to realize current frameworks across their corporate operations.

GAAIP Backed By Industry Consortia Can Bridge The Trust Gap

I believe that a set of generally accepted AI principles (GAAIP) can bridge the gap between nonbinding, voluntary guidelines and the actionable laws and regulations that may follow at a later stage, in much the same way that generally accepted accounting principles (GAAP) create trust in corporate accounting standards. For example, US law requires publicly traded companies and businesses that release financial statements to the public to follow GAAP, but the GAAP themselves are not part of legislation.

Following this example, organizations that adhere to the GAAIP when scaling up their AI-enhanced decision-making could be awarded a seal of trust. My colleagues emphasize that “trust is a business imperative”: While organizations can leverage technology to increase trust with their partners, customers, and employees, poorly managed technology risks can erode it. A trust seal would help organizations reduce the negative impact of these risks. Such a seal could, for example, be issued by a consortium of regulatory institutions and industry associations, along with technology providers and consulting firms; the Veritas Consortium, led by the Monetary Authority of Singapore, does exactly this already. Should the GAAIP prove successful, they may be referenced by (or incorporated into) future legislation on AI and algorithms.

GAAIP Can Help Organizations Align With Peers When Realizing Responsible AI

Nearly all guidance frameworks and legal initiatives feature a core set of principles for responsible AI, revolving around fairness, ethics, accountability, and transparency. These principles can form the core of the GAAIP and can be further broken down by industry or domain to provide organizations with the best possible guidance. This will allow organizations to start realizing responsible AI, not as lone rangers but aligned with peers in their industry, locally and globally.
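To make the idea of domain-specific breakdowns more tangible, here is a minimal, hypothetical sketch of how the core principles and their per-industry controls could be represented in machine-readable form. Only the principle names come from the frameworks discussed above; everything else (the `Principle` structure, the example controls, the `financial_services` and `healthcare` domains) is an illustrative assumption, not part of any published standard.

```python
# A minimal, hypothetical sketch: encoding GAAIP core principles as a
# machine-readable checklist that consortia could break down by industry.
# The Principle structure, example controls, and domain names are
# illustrative assumptions, not part of any published standard.
from dataclasses import dataclass, field


@dataclass
class Principle:
    name: str                 # e.g., "fairness"
    description: str          # what the principle requires in general
    domain_controls: dict[str, list[str]] = field(default_factory=dict)


GAAIP_CORE = [
    Principle(
        name="fairness",
        description="AI-enhanced decisions do not systematically disadvantage groups.",
        domain_controls={
            "financial_services": ["Test credit models for disparate impact before release."],
            "healthcare": ["Validate triage models across patient demographics."],
        },
    ),
    Principle(
        name="transparency",
        description="Decision logic and data lineage can be explained to stakeholders.",
        domain_controls={
            "financial_services": ["Provide reason codes for adverse credit decisions."],
        },
    ),
]


def controls_for(domain: str) -> dict[str, list[str]]:
    """Return the per-principle controls an organization in `domain` would attest to."""
    return {p.name: p.domain_controls.get(domain, []) for p in GAAIP_CORE}


if __name__ == "__main__":
    for principle, controls in controls_for("financial_services").items():
        print(f"{principle}: {controls}")
```

Encoding the principles this way would let a consortium publish the domain breakdowns once and let member organizations audit their own operations against them programmatically.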

Industry-focused collaboration platforms such as the Veritas Consortium are a necessary intermediate step: They bridge the gap between voluntary recommendations and guidelines on one side and the legislative and regulatory rules still on the horizon on the other. Stakeholders in each vertical must collaborate to translate responsible AI guidelines and recommendations into the actions and rules that work best within that vertical. This is how the generally accepted principles for the responsible use of AI will be codified.

Organizations that are still hesitant to leverage AI at scale would benefit the most: The GAAIP would give them the confidence they need and allow them to take their next steps in alignment with other organizations and entities within their industries.

GAAIP — An Invitation To Discuss

Further work is needed to fully implement the technology risk management dimension for AI. This work will determine the cost-versus-value balance, i.e., help organizations understand the costs of being a responsible, ethical business and the processes and people required to successfully scale AI while managing the risk.

Technology risk management for AI will be a focus area for my research in 2022, and I’d be delighted to hear your comments and thoughts. To discuss, please connect with me on LinkedIn! If you are a Forrester client and keen to discuss technology risk management for AI, please don’t hesitate to schedule an inquiry call with me.

I’m looking forward to engaging and exciting discussions!