AI EXPERT-LED LIVE EVENT | Recording
An exclusive AI Governance webinar featuring leading advisors in AI and regulatory affairs, who share insights on the EU AI Act.
Hosted by internationally renowned AI policy expert and TIME Impact Honouree, Kay Firth-Butterfield, this event features Casper Labs CEO Mrinal Manohar and IBM Consulting’s Executive Partner Shyam Nagarajan. They will tackle the pressing compliance challenges of the EU AI Act and the AI Accountability Act, providing a robust framework for risk management and best practices.
Watch the live event recording now to gain invaluable insights and strategies from industry leaders.
As AI-centric regulations proliferate around the world, establishing a clear AI governance strategy has emerged as a board-level priority. With some provisions of the EU AI Act going into effect later this year, organizations worldwide are feeling a new urgency to increase visibility into, and control over, their AI deployments. On June 13, a panel of experts will introduce a framework for identifying and defining your risk vectors, along with the key steps to take toward a sound AI governance program.
CEO | Good Tech Advisory LLC
Kay has been leading the development of Responsible AI since 2014. She was the world’s first C-Suite appointee in AI Ethics (2014), a contributor to the Asilomar Principles (2015), Vice Chair of the IEEE Global Initiative for Ethically Aligned Design (2015) and the inaugural Head of AI and Machine Learning at the World Economic Forum (2017-2023). As CEO of Good Tech Advisory LLC, she advises and speaks to organizations and audiences around the world.
Co-founder and CEO | Casper Labs
Mrinal is co-founder and CEO at Casper Labs, where he’s focused on building and scaling a team that’s structured for the long term. He possesses a unique mix of operational and engineering experience and holds an M.S. from Carnegie Mellon University. A seasoned operator, Mrinal previously led the technology, media and telecommunications (TMT) sector at Sagard Holdings, a multi-billion dollar alternative asset manager, and before that worked in North American private equity for the TMT sector at Bain Capital.
Global Partner | Responsible AI at IBM
As an executive partner and global practice leader at IBM Consulting, Shyam has over 20 years of experience in creating new business models and better outcomes by applying blockchain and AI technologies. He advises clients on employing tokenization, decentralized identity, and provenance to accelerate business outcomes in various industries, such as capital markets, banking, energy, retail, media, and entertainment. He also serves as a board member for both Hedera and the Hyperledger Foundation.
Yes – thanks to the proliferation of AI-focused legislation around the world, including the EU AI Act and the California AI Accountability Act, organizations are facing a new urgency to define and implement clear governance strategies to ensure their AI data management practices are in line with emerging regulations.
Blockchain’s distributed ledger makes it prohibitively difficult for third parties to tamper with data once it’s been recorded. There is no single point of failure, meaning that organizations can reliably audit highly serialized data sets and readily confirm whether data has been altered – and if it has, determine when and by whom. This feature is crucial for maintaining the integrity of data used by AI, particularly in industries where high-risk AI applications are commonplace.
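The tamper-evidence property described above can be illustrated with a minimal hash-chain sketch in plain Python. This is not Casper’s actual implementation – all function names and record fields here are hypothetical – but it shows the core idea: each entry commits to the hash of the previous one, so any later alteration is detectable and traceable to a specific entry.

```python
import hashlib
import json

def record_entry(chain, data):
    """Append a record, linking it to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    chain.append({"data": data, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify_chain(chain):
    """Return the index of the first tampered entry, or None if intact."""
    prev_hash = "0" * 64
    for i, entry in enumerate(chain):
        payload = json.dumps({"data": entry["data"], "prev": prev_hash},
                             sort_keys=True)
        if (entry["prev"] != prev_hash or
                hashlib.sha256(payload.encode()).hexdigest() != entry["hash"]):
            return i
        prev_hash = entry["hash"]
    return None

chain = []
record_entry(chain, {"dataset": "training-v1", "rows": 10_000})
record_entry(chain, {"dataset": "training-v2", "rows": 12_500})
assert verify_chain(chain) is None   # untouched chain verifies cleanly
chain[0]["data"]["rows"] = 9_999     # simulate after-the-fact tampering
assert verify_chain(chain) == 0      # the altered entry is pinpointed
```

A production ledger replicates this chain across many nodes, which is what removes the single point of failure: an attacker would have to rewrite the chain on a majority of independent copies.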
This enhanced visibility can enable more efficient internal workflows, as well as help demonstrate regulatory compliance. Storing AI algorithms and models on the blockchain also enables third parties to audit and verify the AI’s fairness, safety, and non-discrimination, which are essential aspects of compliance with the AI Act.
These benefits don’t just apply to AI systems higher up the risk scale. Blockchain-powered smart contracts can be useful for systems with “limited risk,” too. Self-executing contracts can automate compliance processes within AI models, enforcing the data protection and privacy standards specified by the AI Act, including the requirement for explicit consent for data processing and the right to an explanation for automated decisions.
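The self-executing idea above can be sketched in a few lines of Python. This is a toy illustration, not a real smart contract language or any specific platform’s API – the class and method names are hypothetical – but it captures the pattern: the consent rule is encoded in the contract itself, so processing simply cannot execute without a recorded grant, and every attempt is logged for audit.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentContract:
    """Toy self-executing contract: processing runs only with recorded consent."""
    consents: dict = field(default_factory=dict)
    log: list = field(default_factory=list)

    def grant(self, subject_id: str):
        self.consents[subject_id] = True

    def revoke(self, subject_id: str):
        self.consents[subject_id] = False

    def process(self, subject_id: str, task: str):
        # The compliance rule is enforced in code, not by a manual review step.
        if not self.consents.get(subject_id):
            self.log.append((subject_id, task, "blocked"))
            raise PermissionError(f"no consent recorded for {subject_id}")
        self.log.append((subject_id, task, "executed"))
        return f"{task} completed for {subject_id}"

contract = ConsentContract()
contract.grant("user-42")
contract.process("user-42", "automated-scoring")   # allowed while consent stands
contract.revoke("user-42")
try:
    contract.process("user-42", "automated-scoring")
except PermissionError:
    pass  # processing is refused once consent is withdrawn
```

On an actual blockchain, the contract state and log would live on the shared ledger, giving regulators and auditors an independent record of every grant, revocation, and processing attempt.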
Learn how Casper’s new governance tool can help your business keep up with new regulations.
Any AI expert will tell you that the technology’s transformative benefits come with significant risks. From rising data privacy concerns to the proliferation of deepfakes and misinformation to recurring chatbot hallucinations, businesses and consumers alike are growing increasingly aware of where today’s AI systems fall short.
Because of this, policymakers in the European Union (EU) saw the need to create comprehensive guardrails for this evolving market. On March 13, 2024, the European Parliament passed the landmark AI Act, the first time an extensive set of regulations was approved for the purpose of governing AI-human interactions.
At its core, the EU AI Act outlines risk levels for AI applications across industry use cases and establishes standardized measures for monitoring and reporting AI-related threats to human wellbeing and fundamental rights. Companies can now evaluate their own AI products against specific criteria to ensure a safer, more ethical user experience.