
AI Governance: An Essential Piece of the Compliance Puzzle

June 27, 2024

Mrinal Manohar


“AI regulation: It's not an abstract possibility, it's a concrete reality,” said Kay Firth-Butterfield, CEO of Good Tech Advisory and the world's foremost AI ethicist in our recent webinar on navigating the evolving AI regulatory landscape. 

Alongside Kay and Shyam Nagarajan, Executive Partner at IBM Consulting, I had the privilege of discussing how governance tools can address these emerging compliance challenges. As businesses navigate the risks and rewards of AI, the stakes are rising. From negative publicity to lawsuits and hefty fines, it’s clear that robust governance frameworks are no longer just value-adds but essential components of the larger AI stack. 

“When we're thinking about governance,” Firth-Butterfield shared, “we really need to think about multi-party access and an independent certification.” These capabilities create the transparency required to establish trust in AI training and usage, she said. 

Our conversation also focused on our recently introduced Prove AI tool, an AI governance solution that enhances transparency, access, and auditability for AI datasets, from training and fine-tuning data to inputs and outputs. By creating an auditable and tamper-proof record of AI data on the Casper blockchain, Prove AI can augment AI governance frameworks—specifically IBM watsonx.governance—and enable organizations to create truly trustworthy AI. 
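Prove AI's internals aren't detailed here, but the core idea of a tamper-proof audit record can be illustrated with a minimal sketch: each record is hash-chained to its predecessor, and the final digest can be anchored externally (for example, on a blockchain), so any later alteration of the data is detectable. The function names below are hypothetical.

```python
import hashlib
import json

GENESIS = "0" * 64  # starting digest for an empty chain


def record_digest(prev_digest: str, record: dict) -> str:
    """Chain an audit record to its predecessor via SHA-256."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(prev_digest.encode() + payload).hexdigest()


def build_chain(records: list[dict]) -> list[str]:
    """Return the per-record digest chain; anchor the last digest externally."""
    digests, prev = [], GENESIS
    for rec in records:
        prev = record_digest(prev, rec)
        digests.append(prev)
    return digests


def verify_chain(records: list[dict], digests: list[str]) -> bool:
    """Recompute the chain; any tampered record breaks every later digest."""
    prev = GENESIS
    for rec, expected in zip(records, digests):
        prev = record_digest(prev, rec)
        if prev != expected:
            return False
    return True
```

Because each digest depends on all prior records, modifying any single entry invalidates the rest of the chain, which is what makes the record auditable after the fact.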

Growing maturity in AI governance and regulation

Though it may still be early days in the regulatory environment, the complexity of AI regulation is rapidly increasing as AI becomes more integrated into daily life and business. The EU AI Act, often referred to as AI's GDPR moment, is a significant milestone, setting comprehensive standards for AI deployment within the EU. Similarly, other regions are advancing their regulatory frameworks. In the US, various states are taking action at different rates, with Colorado’s AI Act leading the way. 

In the absence of a global regulatory consensus on AI, businesses face both challenges and opportunities. While the EU AI Act provides a detailed blueprint, regions like China have enacted a myriad of specific regulations targeting different aspects of AI, from algorithmic recommendations to data security. Understanding these diverse regulatory environments is crucial for all businesses operating on a global scale. Yet with near-constant new developments, often tailored down to specific use cases, it can be hard to keep up. It is increasingly clear that businesses need comprehensive, reliable AI governance tools and partnerships to ensure their technology stays abreast of regulatory measures. 

Understanding risk in AI governance

As AI models become integral to organizational operations, responsible AI is increasingly critical. These models are embedded across diverse platforms, from cloud services like Azure and AWS to on-premise systems and tools like Zoom and Outlook. This ubiquity makes risk management both essential and complex. 

Currently, businesses face three main types of risk when using AI tools: reputational risk, organizational risk, and regulatory risk. 

The most pressing and dynamic category is regulatory risk. The fragmented approach to AI regulation exposes businesses to noncompliance and hefty fines. Amid this shifting landscape, AI governance solutions cannot be one-size-fits-all, as each organization’s requirements vary based on industry, geography, and specific use cases.

Reputational risk is straightforward: AI errors can severely damage an organization’s reputation, associating it with biased or prejudiced outputs. Firth-Butterfield highlighted the critical issue of bias in AI systems, which poses significant ethical and practical challenges, particularly in applications like hiring algorithms and financial services. Addressing these biases requires a concerted effort to diversify the data used in AI training and implement rigorous fairness and accountability measures. 

“Every organization has always needed an AI governance strategy, but none so much as now,” Firth-Butterfield emphasized. 

To be effective, Nagarajan added, a governance solution must be adaptable, enable risk assessments, and support management of governance workflows across multiple stakeholders. It should capture model metadata for auditability and transparency and monitor metrics such as accuracy, drift, bias, fairness, and discrimination. Moreover, it must be usable across the entire organization. 
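To make the metric-monitoring requirement concrete, here is a minimal sketch of two of the checks Nagarajan mentioned: distribution drift (via the Population Stability Index) and group fairness (via a demographic parity gap). This is an illustration of the general technique, not the watsonx.governance API; the function names and thresholds are hypothetical.

```python
import math


def psi(expected: list[int], actual: list[int], eps: float = 1e-6) -> float:
    """Population Stability Index over matching histogram buckets.
    Near 0 means the live distribution matches the training baseline;
    a common (hypothetical) alert threshold is around 0.2."""
    total_e, total_a = sum(expected), sum(actual)
    score = 0.0
    for e, a in zip(expected, actual):
        pe = max(e / total_e, eps)
        pa = max(a / total_a, eps)
        score += (pa - pe) * math.log(pa / pe)
    return score


def demographic_parity_gap(outcomes_by_group: dict[str, list[int]]) -> float:
    """Largest difference in positive-outcome rate across groups
    (0 means every group receives positive outcomes at the same rate)."""
    rates = [sum(o) / len(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)
```

A governance workflow would compute checks like these on a schedule, log the results into the audit record, and route threshold breaches to the responsible stakeholders.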

From principles to practice in responsible AI

Establishing a framework for the safe and ethical development and use of AI is crucial for organizations to ensure adversarial robustness: the confidence that AI systems are resilient against tampering and manipulation. Similar to financial audits, this practice verifies not just the integrity of AI models but also their inputs, ensuring that no bad actors have compromised any part of the platform, thereby laying the groundwork for trust. 
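The audit analogy can be sketched in code: record a fingerprint (cryptographic hash) of each model artifact and input file at release time, then re-verify the manifest before deployment to confirm nothing was altered in between. This is a generic illustration of integrity verification, not a description of any specific product.

```python
import hashlib
import pathlib


def fingerprint(path: str) -> str:
    """SHA-256 digest of a file's contents."""
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()


def build_manifest(paths: list[str]) -> dict[str, str]:
    """Record expected fingerprints for model weights, configs, and inputs."""
    return {p: fingerprint(p) for p in paths}


def verify_manifest(manifest: dict[str, str]) -> list[str]:
    """Return the paths whose current contents no longer match the manifest."""
    return [p for p, digest in manifest.items() if fingerprint(p) != digest]
```

An empty result from `verify_manifest` means every audited artifact is byte-for-byte unchanged; any listed path flags a file that was modified after the manifest was recorded.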

As I discussed in the webinar, Casper Labs has partnered with IBM to launch Prove AI, addressing the current regulatory environment and propelling AI innovation forward safely and securely. 

Prove AI leverages blockchain technology to ensure tamper-proof records of data quality, integrating seamlessly with existing governance frameworks. This tool not only meets current regulatory requirements but also prepares businesses for future compliance challenges as AI regulations evolve, thanks to its flexible and transparent nature. 

Meanwhile, watsonx.governance provides an integrated framework to manage the end-to-end lifecycle of AI models, supporting multi-stakeholder collaboration and ensuring compliance and risk management. Prove AI extends watsonx’s capabilities by introducing tamper-proof, auditable version control and multi-stakeholder attestation for data quality—crucial for enterprise-level AI model deployment yet flexible enough for a range of use cases. As Nagarajan shared, “Your governance solution needs to adjust to your specific situation.”

The future of AI governance

As AI continues to permeate various sectors, businesses must prioritize transparency, fairness, and accountability in their AI systems. Only comprehensive, adaptable, and secure governance solutions can enable organizations to harness AI's full potential while maintaining trust and accountability. By adopting comprehensive governance solutions like Prove AI, organizations can not only comply with current regulations but also build a foundation of trust and reliability for the future. 

As we discussed, effective AI governance does more than ensure compliance and mitigate risks: it accelerates innovation. By providing robust governance frameworks, companies can confidently develop and deploy AI technologies, knowing they can quickly address issues as they arise. 

3 Key takeaways

  • AI governance is now a business necessity, not a luxury.
  • AI regulations are already being actively enforced, and more regulations are coming.
  • To achieve AI trust, organizations need to start with ethically sourced, secure training data.

You can view the full webinar here.