This article was co-written by AI experts David Schuler & Rami Heera
Imagine this scenario: Your team has just deployed an AI-powered chatbot that will automate routine customer support requests. However, a few hours after release, you see reports online of the chatbot hallucinating and sending combative and offensive messages to your customers. Despite countless hours of testing, you’re now scrambling to take it offline, fearing further reputational and financial damage.
This is the type of scenario that AI Governance is designed to prevent.Â
Without it, even well-tested AI systems can pose significant risks, leading to unintended consequences and lasting damage. In this blog, we’ll explore the basics of AI Governance and its importance in developing safe, responsible AI systems.
What is AI Governance?
AI Governance is a set of standards, policies, and processes that ensure AI systems are responsibly developed. Its goal is to mitigate the risk of AI while still harnessing all of its potential value.
There have been countless examples of newsworthy AI mistakes over the past few years. From gender bias in hiring algorithms to chatbots selling vehicles for $1 to unsafe treatment recommendations in AI healthcare solutions – all industries implementing AI systems are at risk of legal and reputational damage.
Having a governance process ensures that bleeding-edge technology is properly reviewed before it’s implemented in systems. The goal is not to stifle innovation but rather to ensure that it continues securely, responsibly, and can scale.Â
Mitigating the following risks is important to achieve this goal:
Undiscovered security vulnerabilities
Being untested at scale
Instability of vendors, potentially leading to re-implementation costs
Duplication of effort when the same problem is solved using different patterns
Conversations on AI Governance are gathering steam globally, with perspectives from the World Economic Forum and UN, amongst others, on what is most important for economies, industries, and organizations worldwide. To support these perspectives (amongst others), frameworks are available that lay out areas to focus on implementing strong AI Governance as their foundational pillars.
phData’s AI Governance Framework
phData brings our perspective based on our lived experiences implementing ML models and AI capabilities across various clients and industries and a best-of-breed approach based on what we have seen work. We help organizations identify what is most critical for them.
Here are the pillars of our framework:
Together, these pillars cover the dangers associated with deploying AI at scale and instill confidence both in the outputs of your AI capabilities and in your consumers who benefit from, interact with, and provide inputs to help your AI capabilities evolve.
Transparency
While the hype around AI continues to gain momentum, more and more companies are releasing AI models into production. As this rapid innovation continues, algorithms will generate outputs and decisions that will impact the daily lives of people worldwide. This growing exposure, paired with the relative immaturity of this capability, will lead to questions about how those decisions were arrived at.
Transparency focuses on providing clarity on how models make their decisions. One practical way to improve transparency for your models is to perform an internal audit on the system. The audit can review which datasets were used, which features were important to the model, and ensure that the model’s output is explainable. Having a data and AI platform with features such as data lineage and model tracking will help to enable these audits.
Accountability
Accountability is a key pillar in AI Governance. It ensures that organizations are responsible for their systems’ ethical, legal, and societal impacts. This concept covers everything from regulatory compliance to ethical obligations.
Legal and regulatory compliance is one critical aspect of accountability. As AI regulation rapidly evolves, organizations must stay informed and responsive to new policies to protect consumers and avoid legal repercussions. For example, the state of California recently signed the California AI Transparency Act, which will impose a number of requirements on AI systems, such as providing an AI detection tool and implementing metadata to denote AI’s role in generated content.
Another accountability aspect is informing consumers of AI’s involvement in the application. By disclosing this information, you build trust with your consumers and mitigate future risks of reputational damage. If you’re using any user-provided data for further training or improvement of the system, asking for consent of the user’s data is good practice, which can help further build that trust and ensure consumers don’t have their data used without their knowledge.
Privacy and Security
Well-documented security standards are critical for AI Governance. These standards should include familiar techniques such as access control, authorization, and authentication, which should be applied to the data, services, and models that make up your AI system.
In addition, AI applications bring new security considerations, such as prompt injection and excessive agency, which must also be accounted for. Prompt injection occurs when a bad actor attempts to bypass restrictions, gain access to sensitive data, or use clever prompting strategies to access your natural language model.Â
It’s important to employ defensive measures such as guardrails to identify and reject any prompt that attempts to perform malicious actions against your system. Implementing an AI Gateway is a standard approach for enforcing standard guardrails or filters on all AI applications deployed in your environment.
LLM-based systems may also be able to interact with other systems and services in reaction to a prompt. Excessive agency is a vulnerability where the LLM has been given too broad of access, which can result in unexpected consequences. It’s important to narrowly scope the permissions of the LLM and only provide access to systems necessary for the task at hand.
Equity
Building an equitable AI system means it is free from bias, represents diverse scenarios in your data, and is generally fair for all groups of individuals.
Here are some questions you can ask when thinking about how to ensure your system is equitable:
Are the datasets used to train models representative of underrepresented groups?
Are the models likely to perpetuate or enhance structural bias?
Could an AI model lead to a perception of bias?
When training your own model, these questions are generally easier to answer when compared to using a Foundational Model provided by a third party. For example, the early releases of Google’s flagship model, Gemini, were plagued with racial bias. Consumers of Gemini would also have that bias leaked into their system, so it’s critical to properly vet and test these model providers and also use techniques like guardrails to control the input/output of the model or RAG to ensure the LLM is grounded on your company’s data.
Risk Management
Models embedded in AI systems are inherently imperfect, yet we rely on them to make decisions. Organizations need a framework for identifying risks and deciding when to mitigate vs when to accept.
There are various types of risks when evaluating an AI application. These risks can touch on each pillar defined above, including ethical risks such as bias and discrimination, operational risks such as security vulnerabilities and data leakage, legal and regulatory risks, and reputational risks.Â
Once all risks have been identified, they need to be evaluated by addressing the likelihood and severity of the risk. From there, organizations must have some known tolerance to accept or prioritize work to mitigate the risk.
phData positions these as the most critical areas of focus for AI Governance, which should be grounded in the reality of your organization and your industry.
Harmonizing Governance and Innovation
With all of the excitement about AI, it can be tempting to go as fast as possible to get AI features out in front of your customers. Before releasing that feature, though, you must have AI Governance standards in place so you don’t end up in the news for the wrong reasons.
Governance should not stifle innovation.
Organizations should empower teams with sandbox environments where it’s safe to experiment with various AI technologies. Once an idea is proven and the real implementation begins, governance plays a role.Â
Well-defined governance best practices can speed up certain parts of the implementation process since each team does not spend time defining best practices for each of the governance principles. Instead, they can simply refer to and adhere to the best practices defined by AI Governance.
With strong governance in place, the boundaries of scalable and innovative AI capabilities become clear. This fosters greater consumer trust in your product or service experience, driving both market share growth and revenue. In turn, this trust encourages reinvestment in your foundational AI systems, ensuring sustained innovation and responsible development.
Where Can You Start?
By aligning and grounding on the five pillars outlined above, your organization can start to create an AI Governance Framework that ensures your AI Applications are built responsibly. Here’s some quick start next steps on where to get started:
Establish a cross-functional AI Governance team.
Set standards and best practices for each pillar defined above.
Build out a review process to ensure AI applications are adhering to standards.
Monitor AI applications in production to gather data on user behavior and potential misuse of the system.
Periodically re-review applications to confirm they’re behaving as expected and review any updated standards and guidance.
Iterate on your foundational governance principles.
This space will continue to evolve as we collectively learn more about the technology and the technology itself changes shape.
If you’d like to learn more about building scalable AI Governance for your organization, attending one of our free Generative AI Workshops is a great place to start. These 90-minute sessions are a fantastic resource for getting direction, answering questions, and getting real-world advice from experts passionate about using AI for good.