Master AI Compliance: Join Our Expert-Led Webinar on June 13th

LIVE EVENT | RECORDING + SLIDES

Master AI Compliance

Establishing AI Governance to Meet Compliance Challenges

A webinar featuring IBM, Casper Labs and Kay Firth-Butterfield, Founder & CEO of Good Tech Advisory and an internationally renowned advisor on AI and regulatory affairs.

1
00:00:04.130 --> 00:00:15.170
Matt Coolidge: Hi, everybody, this is Matt Coolidge at Casper Labs. Welcome to Establishing AI Governance to Meet Compliance Challenges. We're gonna give folks just about 60 seconds here to join, and then we will kick things off.

2
00:00:37.170 --> 00:00:42.510
Matt Coolidge: That's a pretty short 60 seconds, but I see folks are joining quite quickly, so we will kick things off

3
00:00:42.610 --> 00:00:49.900
Matt Coolidge: again. Welcome to Establishing AI Governance to Meet Compliance Challenges. I'm happy to introduce today's expert panel.

4
00:00:50.220 --> 00:01:08.649
Matt Coolidge: As businesses adapt to the realities of the AI era, regulators from around the world are stepping in to ensure AI is being employed in a fair and ethical manner, and they're introducing significant penalties for those organizations that fail to comply. Today we will explore these emerging regulations in depth and introduce a framework for compliance.

5
00:01:08.700 --> 00:01:19.469
Matt Coolidge: Joining us today on our panel are Good Tech Advisory CEO Kay Firth-Butterfield, the world's first AI ethicist and a trusted AI advisor for both public and private sector organizations around the world;

6
00:01:19.660 --> 00:01:24.549
Matt Coolidge: Mrinal Manohar, Casper Labs CEO and a pioneer in the emerging AI governance market;

7
00:01:24.600 --> 00:01:30.789
Matt Coolidge: and Shyam Nagarajan, executive partner at IBM Consulting, where he helps to oversee its responsible AI practice.

8
00:01:30.940 --> 00:01:39.909
Matt Coolidge: Note that we will be reserving the last 10 to 15 minutes of this webinar for a live Q&A session. Feel free to input any questions into the Zoom chat over the duration of this presentation.

9
00:01:40.050 --> 00:01:43.419
Matt Coolidge: With that I am pleased to hand things over to Kay to kick us off.

10
00:01:44.430 --> 00:02:07.329
Kay Firth-Butterfield: Thank you, Matt, and hello to you wherever you are in the world today. Thank you for joining us. Perhaps we could go to my first slide, Matt. The AI era is here. We know that we're seeing rising demand, steadily climbing AI budgets and some established use cases, particularly around business ops and cybersecurity.

11
00:02:07.480 --> 00:02:11.720
Kay Firth-Butterfield: but huge benefits, next slide,

12
00:02:13.705 --> 00:02:23.520
Kay Firth-Butterfield: create some risk, and we are seeing that already: there are lawsuits hitting the news.

13
00:02:23.700 --> 00:02:36.190
Kay Firth-Butterfield: And what that tells you is that even if there is not actually AI regulation in your country, there is existing law which is being used

14
00:02:36.380 --> 00:02:55.650
Kay Firth-Butterfield: So, next slide: with all the great opportunities that we see should come responsible AI in your company, because the regulators are beginning to step in. And here are just some examples of that.

15
00:02:56.237 --> 00:03:13.050
Kay Firth-Butterfield: So what is the state of AI regulation at the moment? I think that AI regulation is entering a new phase of maturation. So we know that when cars were first introduced

16
00:03:13.120 --> 00:03:37.649
Kay Firth-Butterfield: they didn't have much regulation. They went slowly, but as they sped up, and as they gave us these huge benefits that we have from cars, we also found that increasingly there were harms, and it seems that that's actually where we are with AI at the moment. And of course everybody's talking about the EU AI Act

17
00:03:38.122 --> 00:03:57.510
Kay Firth-Butterfield: as AI's GDPR moment. And so let's look in more detail at some of the regulations around the world. And just on this slide alone, although I have taken the definition of an AI system from the EU AI Act

18
00:03:57.600 --> 00:04:15.450
Kay Firth-Butterfield: and the OECD, actually, you can see at the bottom that all of these regulators in the US have already formulated plans, or are doing something, in relation to governance of AI.

19
00:04:15.490 --> 00:04:24.950
Kay Firth-Butterfield: So, quickly: what is an AI system according to Europe and the OECD? An AI system is a machine-based system that,

20
00:04:25.000 --> 00:04:28.379
Kay Firth-Butterfield: for explicit or implicit objectives

21
00:04:28.440 --> 00:04:39.330
Kay Firth-Butterfield: infers from the input it receives how to generate outputs such as predictions, content, recommendation

22
00:04:39.360 --> 00:04:41.220
Kay Firth-Butterfield: or decisions

23
00:04:41.250 --> 00:04:46.120
Kay Firth-Butterfield: that can influence physical or virtual environments.

24
00:04:46.500 --> 00:04:56.780
Kay Firth-Butterfield: And as you think about everything that you hear, maybe keep that definition in mind. Can we move on to the next slide, please?

25
00:04:57.170 --> 00:05:16.930
Kay Firth-Butterfield: So is there a global regulation? Well, there is not. We have been trying to get some sort of agreement about lethal autonomous weapons for probably the last seven years now without anything coming out.

26
00:05:16.930 --> 00:05:33.139
Kay Firth-Butterfield: However, on the 17th of May 2024, and I think that people don't realize this so much, because conversations are dominated by the EU AI Act, the Council of Europe actually created a legally binding treaty.

27
00:05:33.200 --> 00:05:46.399
Kay Firth-Butterfield: It is expected that not only the EU states will sign it, but also the 12 other states that were part of the consultative process, which includes the USA.

28
00:05:46.580 --> 00:06:06.379
Kay Firth-Butterfield: All these other actors that you see on the slide are working around responsible AI on the global stage. So you will find great repositories of work to help you think about responsible AI from the World Economic Forum, from the OECD

29
00:06:06.380 --> 00:06:17.459
Kay Firth-Butterfield: and from GPAI. The United Nations itself is working at the moment and has a high-level consultation group working on AI.

30
00:06:18.139 --> 00:06:24.180
Kay Firth-Butterfield: So let's just go to some data problems with AI.

31
00:06:25.470 --> 00:06:27.620
Kay Firth-Butterfield: We're back one. Thank you.

32
00:06:27.780 --> 00:06:41.420
Kay Firth-Butterfield: So, although large language models are with us and causing enormous amounts of excitement, actually we only have 2 million GPT users,

33
00:06:42.130 --> 00:06:58.970
Kay Firth-Butterfield: and probably more through enterprise GPT, but that disproportionately represents the number of people in the world. So, for example, 3 billion people are unconnected.

34
00:06:59.455 --> 00:07:28.589
Kay Firth-Butterfield: We also have connected communities who are, in fact, not producing very much data. And just to give you a really good example, in the UK, which you would think was fairly forward-leaning on using ChatGPT, a recent Oxford study showed that 9% of people in the UK use ChatGPT, so that's only 9%,

35
00:07:28.590 --> 00:07:33.900
Kay Firth-Butterfield: and only one in ten Britons have actually even heard of ChatGPT.

36
00:07:34.711 --> 00:07:39.630
Kay Firth-Butterfield: What we know is that large language models

37
00:07:40.183 --> 00:07:49.459
Kay Firth-Butterfield: use the data that has already been created, while most of that was created in the Global North by men.

38
00:07:49.490 --> 00:08:01.240
Kay Firth-Butterfield: And what we know also is that 46% of the users of ChatGPT, for example, are white guys based in the United States.

39
00:08:01.290 --> 00:08:18.609
Kay Firth-Butterfield: What does that do to bias issues in large language models? Well, it means that where there is little representation of women in the Global North, or persons of color,

40
00:08:18.730 --> 00:08:38.829
Kay Firth-Butterfield: as well as, of course, very little representation of people in the Global South, and those 3 billion who are unconnected, what does that mean for the models? It means that unless we do something about this, we will end up perpetuating

41
00:08:39.216 --> 00:08:53.919
Kay Firth-Butterfield: the way that white men think. And so I was at a conference yesterday talking to some women, and we all felt that we should be using these models a lot to try and get our data into them.

42
00:08:53.960 --> 00:09:13.360
Kay Firth-Butterfield: But what does that mean for responsible AI? Well, first of all, the world agrees on what responsible AI is; we just don't agree on how to deploy it. So we agree that it's bias, fairness, security, reliability and robustness,

43
00:09:14.448 --> 00:09:23.529
Kay Firth-Butterfield: privacy, explainability, transparency, human agency, accountability, lawfulness, sustainability and safety.

44
00:09:23.550 --> 00:09:41.009
Kay Firth-Butterfield: But let me just go back to bias, the first thing. Obviously, if you're using a large language model, given the explanation I've just given about data, then you have to be really, really careful about bias creeping into the models that you're creating from it.

45
00:09:41.060 --> 00:09:43.469
Kay Firth-Butterfield: So next slide, please.

46
00:09:46.832 --> 00:10:09.499
Kay Firth-Butterfield: I'm going to now look at what the world is doing in respect of legislation around AI, and you can go and look up all these places. I suggest the IAPP website as a good resource. But what's happening, for example, in Brazil?

47
00:10:09.690 --> 00:10:18.900
Kay Firth-Butterfield: Well, Brazil has already had a preliminary analysis of its bill on AI, which is Bill 2338,

48
00:10:19.040 --> 00:10:46.970
Kay Firth-Butterfield: and a final opinion, so there is movement towards regulating AI in Brazil. But Brazil also has its version of the GDPR, a Civil Rights Act for the Internet and a consumer protection code. And it's developing a regulatory tool for AI along with the Bank of Latin America. And if we move to Asia

49
00:10:47.800 --> 00:10:49.600
Kay Firth-Butterfield: and Oceania.

50
00:10:51.410 --> 00:11:04.849
Kay Firth-Butterfield: obviously, there are lots of interesting countries here that are all developing their own regulatory systems. But I'm just going to talk about China and Singapore,

51
00:11:05.080 --> 00:11:12.609
Kay Firth-Butterfield: and China actually has enforced a lot of regulation, perhaps more regulation than anybody else.

52
00:11:12.810 --> 00:11:16.259
Kay Firth-Butterfield: And I'm just going to highlight a few.

53
00:11:16.330 --> 00:11:21.159
Kay Firth-Butterfield: So the Algorithmic Recommendation Management Provisions,

54
00:11:21.390 --> 00:11:27.520
Kay Firth-Butterfield: the Interim Measures for the Management of Generative AI Services,

55
00:11:27.780 --> 00:11:31.140
Kay Firth-Butterfield: the Deep Synthesis Management Provisions,

56
00:11:31.290 --> 00:11:35.649
Kay Firth-Butterfield: scientific and tech ethics regulations,

57
00:11:35.790 --> 00:11:41.630
Kay Firth-Butterfield: and the Data Security Law and Personal Information Protection Law.

58
00:11:41.780 --> 00:11:57.649
Kay Firth-Butterfield: So whilst we tend to think that China's moving forward without regulation, that's not correct. There's a lot of regulation that is business-to-customer rather than state-to-citizen.

59
00:11:57.830 --> 00:12:24.000
Kay Firth-Butterfield: And then, if we look at Singapore, I have particular affection for AI Verify, which is the tool that they have been developing, because it actually started in my office when I was working at the World Economic Forum. It looks as if it's probably going to be influential on smaller countries in Asia. So take a look at that if you're trading with Asia.

60
00:12:24.070 --> 00:12:27.619
Kay Firth-Butterfield: Let's move on and see what MENA is doing.

61
00:12:27.690 --> 00:12:44.860
Kay Firth-Butterfield: Well, Saudi Arabia and the UAE don't have any AI laws in force, but there's a lot going on in respect of AI. For example, the UAE was the first country to have a Ministry of AI.

62
00:12:44.860 --> 00:12:58.900
Kay Firth-Butterfield: And actually, there are many data laws in place in both of those countries, including, in Saudi Arabia, a Children and Incompetents' Data Protection Policy.

63
00:12:59.230 --> 00:13:03.160
Kay Firth-Butterfield: So, let's now look at what's happening in North America.

64
00:13:04.900 --> 00:13:11.610
Kay Firth-Butterfield: Well, we already have the Biden executive order.

65
00:13:12.113 --> 00:13:34.810
Kay Firth-Butterfield: Nothing at federal level apart from that, but we do have the NIST framework, which I imagine many of you are using in pursuit of responsible AI, and a lot of work that NIST is authorized to do, and is doing now, in respect of the executive order.

66
00:13:35.561 --> 00:13:54.529
Kay Firth-Butterfield: We are also seeing that some of the big tech firms are issuing indemnity statements in respect of copyright issues. So if you're going to be using any of those tools, look for those indemnity statements and work out if they actually do cover you.

67
00:13:55.215 --> 00:14:11.279
Kay Firth-Butterfield: As I mentioned before, there are a number of regulators and existing laws. So we are seeing a number of cases against OpenAI for copyright transgressions.

68
00:14:11.785 --> 00:14:28.979
Kay Firth-Butterfield: Air Canada was successfully sued recently for its chatbot not getting things right. And while we're on Canada, the AI and Data Act is making its way through Parliament as we speak.

69
00:14:29.010 --> 00:14:42.289
Kay Firth-Butterfield: But perhaps the best example here of what's happening in North America: most states are beginning to create their own legislation, which causes a difficult patchwork for us to keep up with.

70
00:14:42.390 --> 00:14:49.769
Kay Firth-Butterfield: Colorado passed the Consumer Protections for Interactions with Artificial Intelligence Act,

71
00:14:49.810 --> 00:15:06.999
Kay Firth-Butterfield: which comes into force at the beginning of 2026 and has been signed by the Governor. In Connecticut, they had something similar, which the Governor threatened to veto, so it didn't proceed.

72
00:15:07.130 --> 00:15:20.300
Kay Firth-Butterfield: But you should also remember California. California's got 30 bills on AI regulation at the moment in various stages of proceeding through the legislature.

73
00:15:20.560 --> 00:15:24.369
Kay Firth-Butterfield: And so now let's look at Europe.

74
00:15:25.688 --> 00:15:52.130
Kay Firth-Butterfield: Obviously, you can tell from the accent that I have to first look at the UK, because we're not part of Europe anymore. The UK has the Competition and Markets Authority, or the CMA for short; it has existing laws on the books, but it is bringing into force the Digital Markets, Competition and Consumers Bill,

75
00:15:52.130 --> 00:16:07.919
Kay Firth-Butterfield: and that will allow the CMA to proceed in many ways to regulate AI. And so if you're trading with the UK, you should have a look at that.

76
00:16:08.070 --> 00:16:22.539
Kay Firth-Butterfield: With regard to the EU AI Act, of course, you know that came into force this week, the 10th of June. So let's very quickly look at that.

77
00:16:24.290 --> 00:16:38.039
Kay Firth-Butterfield: It specifies various risk levels, from unacceptable to minimal, and Mrinal is going to walk you through those with some examples shortly,

78
00:16:38.210 --> 00:16:39.010
Kay Firth-Butterfield: and

79
00:16:40.150 --> 00:16:41.360
Kay Firth-Butterfield: so

80
00:16:41.420 --> 00:16:47.040
Kay Firth-Butterfield: perhaps we can move to the next slide, which is product liability.

81
00:16:47.080 --> 00:17:09.679
Kay Firth-Butterfield: So I think that, you know, if you don't know about the EU AI Act, there are a lot of resources out there. You should obviously go and look at it. You should know where your product looks as if it fits in those risk categories. And Casper Labs has a good frequently asked questions section

82
00:17:09.680 --> 00:17:21.749
Kay Firth-Butterfield: on its website. But there are also two directives moving through the European system at the moment that you should be aware of.

83
00:17:22.140 --> 00:17:24.930
Kay Firth-Butterfield: the AI Liability Directive

84
00:17:25.478 --> 00:17:32.060
Kay Firth-Butterfield: provides that a person who is harmed by an AI system

85
00:17:32.110 --> 00:17:52.509
Kay Firth-Butterfield: will enjoy the same level of protection as if they had been harmed by a different technology. It creates a rebuttable presumption of causality, so it throws the burden of proof onto the person who created the AI system,

86
00:17:52.650 --> 00:18:00.220
Kay Firth-Butterfield: and also provides interesting evidential requirements around producing details of your software.

87
00:18:00.380 --> 00:18:15.190
Kay Firth-Butterfield: The new Product Liability Directive, which is also making its way through the European decision-making process, includes software in its definition of a product,

88
00:18:15.340 --> 00:18:17.480
Kay Firth-Butterfield: and thus

89
00:18:17.660 --> 00:18:21.170
Kay Firth-Butterfield: anybody integrating software

90
00:18:21.350 --> 00:18:29.340
Kay Firth-Butterfield: into their products is potentially liable for harm.

91
00:18:29.870 --> 00:18:32.340
Kay Firth-Butterfield: So let's move on to the next

92
00:18:33.181 --> 00:18:54.880
Kay Firth-Butterfield: slide. And, as I said, the timeline for implementation started 3 days ago. So this is on your slate now, and, in fact, by December 2024, we will see some of that legislation coming into force.

93
00:18:55.303 --> 00:19:04.900
Kay Firth-Butterfield: If there's one slide that you should take a screenshot of, if you trade with Europe or with Europeans, it's probably this one.

94
00:19:05.050 --> 00:19:11.109
Kay Firth-Butterfield: And just remember that some of these fines are very large.

95
00:19:11.210 --> 00:19:24.209
Kay Firth-Butterfield: For the biggest transgressions, we're looking at 35 million euros per transgression, or up to 7% of your annual turnover.
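
As a rough illustration of the ceiling Kay cites, and only a sketch (the actual fine basis, tiers and definitions are set by the EU AI Act itself, not by this arithmetic), the maximum exposure for the most serious violations is the greater of a fixed amount and a share of annual turnover:

```python
def max_eu_ai_act_fine(annual_turnover_eur: float,
                       fixed_cap_eur: float = 35_000_000,
                       turnover_share: float = 0.07) -> float:
    """Upper bound for the most serious violations: the higher of the fixed
    amount and the turnover percentage, per the figures cited in the talk."""
    return max(fixed_cap_eur, turnover_share * annual_turnover_eur)

# A company with EUR 2 billion in annual turnover:
print(max_eu_ai_act_fine(2_000_000_000))  # 140000000.0
```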

96
00:19:24.280 --> 00:19:28.560
Kay Firth-Butterfield: And then I think we'll move on to some examples.

97
00:19:31.147 --> 00:19:53.850
Kay Firth-Butterfield: In human resources, it's a high-risk application under the EU AI Act. It is also a high-risk application under the Colorado law, as is, for example, education, banking and healthcare.

98
00:19:54.494 --> 00:20:17.789
Kay Firth-Butterfield: and the Equal Employment Opportunity Commission here in the United States has started saying, we will bring cases against people regardless of whether they're using an algorithm that discriminates or a person that discriminates, and they are actually using

99
00:20:17.790 --> 00:20:24.139
Kay Firth-Butterfield: a very old law, the Civil Rights Act of 1965, in doing so.

100
00:20:24.270 --> 00:20:54.140
Kay Firth-Butterfield: But you know, sometimes bias is necessary, and so I encourage you to think this through by doing a little bit of homework. David Hardoon and others published a paper in the Harvard Business Review talking about how they, who are working in banking in Asia, want to make sure that women receive bank accounts. So, banking the unbanked.

101
00:20:54.150 --> 00:21:00.730
Kay Firth-Butterfield: And of course you need bias in your model in order to find the women to bank.

102
00:21:01.200 --> 00:21:10.459
Kay Firth-Butterfield: and so it might be worth having a read of that, and then comparing it against some of the laws that are now on the books or coming on the books.

103
00:21:11.700 --> 00:21:13.480
Kay Firth-Butterfield: So healthcare

104
00:21:13.972 --> 00:21:21.809
Kay Firth-Butterfield: we are seeing, obviously, use of chatbots, and AI in radiology, which is a very safe use.

105
00:21:22.326 --> 00:21:34.230
Kay Firth-Butterfield: So if you're in the healthcare space, there are lots and lots of exciting applications, but also you need to be extremely careful, of course.

106
00:21:34.740 --> 00:21:45.319
Kay Firth-Butterfield: And in banking and financial services: that Colorado Act applies to both healthcare and banking and financial services.

107
00:21:45.929 --> 00:22:05.629
Kay Firth-Butterfield: We're seeing chatbots helping, for example, answering and taking calls that call centers would have taken. These are seen as much less risky than using AI to predict loans.

108
00:22:05.980 --> 00:22:10.209
Kay Firth-Butterfield: and so maybe to the next slide. Please.

109
00:22:10.590 --> 00:22:36.549
Kay Firth-Butterfield: Deepfakes. I just wanted to mention deepfakes, because as well as having deepfakes in politics, revenge porn, child abuse and other places, we're seeing them come into businesses. And so an employee of a bank in Hong Kong parted with 20 million dollars

110
00:22:37.030 --> 00:22:56.710
Kay Firth-Butterfield: as a result of being the only real person in a video conference, where she believed that everybody else in the video conference were her co-employees, whom, in fact, she knew, but they were all deepfakes.

111
00:22:57.000 --> 00:23:04.680
Kay Firth-Butterfield: So it's really important to think about responsible AI and processes for deepfakes.

112
00:23:04.860 --> 00:23:08.989
Kay Firth-Butterfield: Then, moving on to vulnerable people,

113
00:23:10.470 --> 00:23:14.580
Kay Firth-Butterfield: I just wanted to very quickly talk about smart toys.

114
00:23:15.104 --> 00:23:32.910
Kay Firth-Butterfield: Smart toys are AI-enabled toys. There is very little control on where the data goes; many of them use facial recognition, for example. And so if you are thinking of buying your child

115
00:23:32.910 --> 00:23:46.850
Kay Firth-Butterfield: a smart toy, really think very carefully about reading the books and finding out more about how AI is being used with those toys.

116
00:23:46.890 --> 00:23:55.790
Kay Firth-Butterfield: And the example here is My Friend Cayla, which was actually banned in Germany as a spying device.

117
00:23:56.100 --> 00:24:13.019
Kay Firth-Butterfield: And so also thinking about older adults, as we are all coming to be cared for by AI: how do we establish trust in the tools that we build amongst older adults?

118
00:24:13.670 --> 00:24:17.629
Kay Firth-Butterfield: So let's move to some takeaways on regulation.

119
00:24:18.137 --> 00:24:34.060
Kay Firth-Butterfield: AI regulation is not an abstract possibility; it's actually a concrete, real reality. And we've seen that in some of the legislation I've been talking about.

120
00:24:34.420 --> 00:24:41.860
Kay Firth-Butterfield: Every organization has always needed an AI governance strategy, but never so much as now.

121
00:24:42.050 --> 00:24:50.460
Kay Firth-Butterfield: And when we're thinking about governance, we really need to think about multiparty access

122
00:24:50.490 --> 00:25:03.700
Kay Firth-Butterfield: and independent certification. And I think that that's a great segue for me to hand on to Mrinal, who is the CEO of Casper Labs. Thank you.

123
00:25:14.450 --> 00:25:22.030
Mrinal Manohar: Thank you so much, Kay. That was incredibly enlightening, and I hope everyone on the call feels similarly. Can you all hear me loud and clear?

124
00:25:23.950 --> 00:25:24.720
Matt Coolidge: Yes.

125
00:25:25.770 --> 00:25:26.880
Mrinal Manohar: Okay, great.

126
00:25:27.190 --> 00:25:35.610
Mrinal Manohar: So Kay covered a lot of ground, especially about the need for multiparty access and independent certification.

127
00:25:35.650 --> 00:25:40.930
Mrinal Manohar: Moving on to the next slide. Let's talk about what that means in terms of

128
00:25:40.940 --> 00:25:44.849
Mrinal Manohar: what you actually need to trust an underlying AI system.

129
00:25:45.020 --> 00:25:58.949
Mrinal Manohar: I think explainability and transparency are self-evident, meaning you should be able to easily understand why your AI model is doing what it does and easily interpret results. Similarly, you need underlying accountability,

130
00:25:59.380 --> 00:26:10.540
Mrinal Manohar: and fairness. Of course you want equitability, and you want to make sure that opinions across a vast swath of the population exist. But two areas I really want to focus on here

131
00:26:10.750 --> 00:26:37.690
Mrinal Manohar: are adversarial robustness, which is a notion, you know, within all of these acts: you need to prove that the underlying imprints that are tracking your AI system haven't been tampered with, and you also need to prove that your underlying AI system has not been tampered with. So think about it as a level of accountability approaching financial statements, but with the addition of tamper-proofing.
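
To make the tamper-evidence idea concrete, here is a minimal, hypothetical sketch of a hash-chained audit log; this is not the Prove AI or Casper implementation, just the generic pattern: each entry commits to the previous entry's hash, so any later edit to an earlier record breaks verification.

```python
import hashlib
import json
from datetime import datetime, timezone

def _digest(body: dict) -> str:
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_event(log: list, event: dict) -> None:
    """Append an audit event that commits to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"ts": datetime.now(timezone.utc).isoformat(),
             "event": event, "prev_hash": prev_hash}
    entry["hash"] = _digest({k: entry[k] for k in ("ts", "event", "prev_hash")})
    log.append(entry)

def verify(log: list) -> bool:
    """Recompute every hash; tampering with any earlier entry is detected."""
    prev = "0" * 64
    for entry in log:
        body = {k: entry[k] for k in ("ts", "event", "prev_hash")}
        if entry["prev_hash"] != prev or entry["hash"] != _digest(body):
            return False
        prev = entry["hash"]
    return True

log = []
append_event(log, {"action": "train", "model": "demo-v1", "dataset": "batch-42"})
append_event(log, {"action": "deploy", "model": "demo-v1"})
assert verify(log)
```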

132
00:26:38.050 --> 00:26:44.199
Mrinal Manohar: Additionally, given that AI is multi-party, there are real data privacy issues.

133
00:26:44.220 --> 00:26:52.240
Mrinal Manohar: You need to make sure that access control is governed in a very, very rigorous manner, which is what we do via Prove AI,

134
00:26:52.280 --> 00:27:02.889
Mrinal Manohar: and that's also hyper-enabled because of our blockchain technology, which gives both adversarial robustness as well as state-of-the-art, Turing-complete data privacy and access control.

135
00:27:03.020 --> 00:27:13.839
Mrinal Manohar: Moving on to the next slide, let's talk. As Kay Firth-Butterfield mentioned, there are varying levels of risk to these underlying AI models,

136
00:27:14.250 --> 00:27:25.740
Mrinal Manohar: and as a result, the regulation flexes. So let's talk about what those risk vectors are first, and then I'll give some examples of certain use cases and where they lie on that risk spectrum.

137
00:27:25.760 --> 00:27:36.839
Mrinal Manohar: So first of all, because of the lack of AI governance, there's a lot of opacity into what the training data actually was. With a tool like Prove AI, you remove that opacity.

138
00:27:37.080 --> 00:27:48.670
Mrinal Manohar: Second, another risk vector is that personally identifiable information, confidential information or copyrighted information might be sneaking its way into these underlying AI models.

139
00:27:49.030 --> 00:28:01.170
Mrinal Manohar: The combination of these things could result in unintended consequences in how the model performs, and thereby you have a lack of auditability. All of these four issues are really solved by the Prove AI system.

140
00:28:01.450 --> 00:28:15.099
Mrinal Manohar: But let's go to the right and talk about the risk spectrum, going from no obligations and low and minimal risk all the way up to unacceptable risk. And I'll talk about the two ends of the spectrum first, and it'll become clear why.

141
00:28:15.590 --> 00:28:40.550
Mrinal Manohar: So at the bottom, where really there are no obligations for governance, these are use cases that have existed and are very, very deterministic. An example here would be a spam filter in your email inbox, which you've had, you know, for over a decade now, or a thermostat that maintains temperature. These are AI systems, but they're very, very simplistic, straightforward and highly deterministic.

142
00:28:40.590 --> 00:28:42.940
Mrinal Manohar: And so there's no need for AI governance there.

143
00:28:43.030 --> 00:28:51.899
Mrinal Manohar: Let's talk about unacceptable risks, which are prohibited AI practices. So these are things that target children or protected categories, for example.

144
00:28:52.302 --> 00:29:06.620
Mrinal Manohar: You know, if you're targeting children, or social ranking systems, for those of you who've seen Black Mirror, the episode Nosedive is a good example of that. So that's just a couple of examples of what are prohibited AI practices.

145
00:29:06.680 --> 00:29:08.830
Mrinal Manohar: So as you can glean

146
00:29:09.160 --> 00:29:26.859
Mrinal Manohar: everything that we consider to be AI, automated robots, chatbots, LLMs, all of them fall right in the middle, that is, high-risk to medium-risk applications, meaning there are transparency obligations, and there are obligations to eliminate all the key AI risk vectors on the left.

147
00:29:27.100 --> 00:29:34.309
Mrinal Manohar: So I'd say it's almost okay to ignore the two ends of the spectrum, because everything we engage with is really the middle chunk.
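
A hedged sketch of the tiering logic Mrinal describes; the tier names and example use cases below are illustrative only, and the authoritative classification comes from the Act and legal review, not from a lookup table:

```python
RISK_TIERS = {
    "unacceptable": {"social_scoring", "exploiting_children"},          # prohibited practices
    "high_or_medium": {"hiring", "credit_scoring", "chatbot", "llm_assistant"},
    "minimal": {"spam_filter", "thermostat"},                           # effectively no obligations
}

def classify_use_case(use_case: str) -> str:
    """Return the illustrative risk tier; default to the middle band,
    where transparency and risk-mitigation obligations apply."""
    for tier, cases in RISK_TIERS.items():
        if use_case in cases:
            return tier
    return "high_or_medium"

print(classify_use_case("spam_filter"))     # minimal
print(classify_use_case("social_scoring"))  # unacceptable
print(classify_use_case("hiring"))          # high_or_medium
```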

148
00:29:34.690 --> 00:29:38.230
Mrinal Manohar: onto the next slide. And as a result of this.

149
00:29:40.160 --> 00:30:06.960
Mrinal Manohar: everyone is really concerned about how they're going to manage this risk, and it's very important to be able to take on this risk with confidence. 90% of leaders recognize that their operating costs are going to go up to make sure that they're compliant with these AI standards. Honestly, net net, I think the cost is actually lower once you have an automated system that takes care of this, similar to how we have financial audit systems today.

150
00:30:07.160 --> 00:30:28.319
Mrinal Manohar: And well over 70% in both cases feel that keeping up with these technical innovations, including AI, is incredibly important, and not keeping up is a significant risk to the competitiveness of a company. But in addition, they realize that they need to upgrade their risk management operating models.

151
00:30:28.840 --> 00:30:32.749
Mrinal Manohar: So onto the next slide: what does Prove AI bring to the table?

152
00:30:32.890 --> 00:30:48.070
Mrinal Manohar: So Casper Labs, in collaboration with IBM, is really advancing AI governance. The problem is that enterprises today don't have an end-to-end tool that can rein in AI and increase transparency and auditability of these systems.

153
00:30:48.270 --> 00:31:08.880
Mrinal Manohar: So we're developing a comprehensive governance solution that delivers a highly secure data store, and you can also monitor, manage and share your underlying AI models. Some of the characteristics that we bring, some of them uniquely, are: first, tamper-proofing. The entire record on the public Casper blockchain is completely tamper-resistant.

154
00:31:09.460 --> 00:31:26.940
Mrinal Manohar: This also comes with hybrid functionality, which allows data privacy, meaning if there's certain data that needs to stay within your four walls, you can do that via a hybrid instance. However, you get all the benefits of the public network, such as immutability and decentralized trust.

155
00:31:27.080 --> 00:31:31.369
Mrinal Manohar: As a result of this tamper-proofing, you have a fully auditable build,

156
00:31:31.760 --> 00:31:38.199
Mrinal Manohar: and given that every single element is being tracked, you have the best form of version control.

157
00:31:38.440 --> 00:31:55.380
Mrinal Manohar: If you think about AI systems today, when they start misbehaving, the general approach is to retrain these models, which is (a) incredibly expensive but (b) also nondeterministic: you're hoping that things will work out, when oftentimes they do not.
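
As a hedged illustration of the rollback argument (a hypothetical structure, not Prove AI's actual data model), keeping every model version addressable by a content hash turns "go back to the last known-good version" into a lookup rather than a retraining job:

```python
import hashlib

class ModelRegistry:
    """Content-addressed version history: roll back by hash instead of retraining."""
    def __init__(self) -> None:
        self.versions: dict[str, bytes] = {}
        self.history: list[str] = []

    def register(self, weights: bytes) -> str:
        vid = hashlib.sha256(weights).hexdigest()[:12]
        self.versions[vid] = weights
        self.history.append(vid)
        return vid

    def rollback(self) -> bytes:
        """Return the previous version's weights (no retraining needed)."""
        if len(self.history) < 2:
            raise RuntimeError("no earlier version to roll back to")
        self.history.pop()  # drop the misbehaving version
        return self.versions[self.history[-1]]

registry = ModelRegistry()
registry.register(b"known-good weights")
registry.register(b"misbehaving weights")
assert registry.rollback() == b"known-good weights"
```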

158
00:31:56.190 --> 00:32:12.450
Mrinal Manohar: And finally, this requires multi-party attestation. The systems themselves are multi-party, but you need to have them attested, either by, you know, a third-party auditor, a Vanta, a Schellman, etc., and all of these come out of the box with Prove AI. Onto the next slide.

159
00:32:12.520 --> 00:32:38.879
Mrinal Manohar: Just to give you an idea of the architecture before we move on to the next section: it is a completely open platform. There are 60-plus native connectors to ingest data, no matter where your data resides, in a data warehouse or data lake, Snowflake, Mongo, it doesn't matter; it really is an open-access platform, with native connectors to all the large AI model providers as well, so IBM, Hugging Face, Mistral, Meta, etc.

160
00:32:39.010 --> 00:32:53.019
Mrinal Manohar: And so Prove AI integrates with watsonx.governance, which is an IBM product with a legacy in OpenPages, which has been IBM's market-leading GRC, or governance, risk and compliance, product

161
00:32:53.050 --> 00:32:54.669
Mrinal Manohar: for several decades.

162
00:32:54.710 --> 00:33:00.360
Mrinal Manohar: and it works with a bunch of open-source technologies, such as Red Hat OpenShift, and any cloud provider.

163
00:33:00.770 --> 00:33:21.639
Mrinal Manohar: With that, I'd like to hand it over to Shyam Nagarajan from IBM, who heads the AI governance practice for IBM Consulting. Given that watsonx.governance and the watsonx platform are really integral to Prove AI, we'd love to give an introduction to that platform as well. Over to you, Shyam.

164
00:33:22.820 --> 00:33:32.719
Shyam Nagarajan: Thank you, Mrinal, and thank you, Kay. You set the stage really well, and this is, you know, just a perfect segue to talk about what AI governance is.

165
00:33:33.780 --> 00:33:54.269
Shyam Nagarajan: Look, AI governance is complicated. It involves multidisciplinary stakeholders within any organization, all the way from the risk and compliance office, to the CISO, to business operations, to the CDO, the data science office,

166
00:33:54.520 --> 00:33:58.379
Shyam Nagarajan: ultimately to procurement and as well as the infrastructure.

167
00:33:58.820 --> 00:34:18.759
Shyam Nagarajan: It also requires a lot of manual work: the ability to estimate or identify those risks and then put controls in place to mitigate those risks, and then the ability to track the performance of all these different models in compliance with those risks, and continuous evaluation, the proof of audits, and so on.

168
00:34:19.550 --> 00:34:41.718
Shyam Nagarajan: Today, AI models are starting to really permeate everything that organizations do, from models that are resident in different cloud platforms and hyperscalers, like Azure or AWS, or even on-prem, to the ones that are coming into your organizations embedded as part of

169
00:34:42.643 --> 00:35:05.520
Shyam Nagarajan: commercial off-the-shelf software that's available, or the ones that are popping up on your screen when you're on Zoom, or even in Outlook as copilots. It's everywhere. How do you put a structure and a framework in place that helps the

170
00:35:05.720 --> 00:35:13.719
Shyam Nagarajan: organization as a whole, but also individuals, to properly and appropriately use this safely within the organization?

171
00:35:13.900 --> 00:35:18.139
Shyam Nagarajan: A governance solution is not one-size-fits-all,

172
00:35:18.490 --> 00:35:19.410
Shyam Nagarajan: and

173
00:35:19.560 --> 00:35:34.009
Shyam Nagarajan: a lack of appropriate tools for this multi-stakeholder collaboration and communication impacts stakeholder management really significantly as the implementation goes on in an organization. Next slide, please.

174
00:35:38.370 --> 00:35:41.280
Shyam Nagarajan: When you are considering a governance solution,

175
00:35:41.390 --> 00:35:49.140
Shyam Nagarajan: you need to have one that really caters to managing the risk

176
00:35:49.250 --> 00:35:50.990
Shyam Nagarajan: across the enterprise.

177
00:35:52.010 --> 00:35:55.340
Shyam Nagarajan: Risk and governance requirements vary

178
00:35:55.600 --> 00:35:57.350
Shyam Nagarajan: by use case,

179
00:35:57.610 --> 00:35:59.230
Shyam Nagarajan: by industry.

180
00:35:59.410 --> 00:36:06.340
Shyam Nagarajan: by geography, as jurisdictional and regulatory impact are very geography-specific,

181
00:36:06.650 --> 00:36:08.060
Shyam Nagarajan: by company.

182
00:36:08.070 --> 00:36:09.640
Shyam Nagarajan: and as well as

183
00:36:09.670 --> 00:36:11.740
Shyam Nagarajan: what technology is used.

184
00:36:11.770 --> 00:36:22.329
Shyam Nagarajan: And especially if it is traditional ML models versus gen AI models, the risks are different, and the controls and the guardrails will be different.

185
00:36:22.680 --> 00:36:42.160
Shyam Nagarajan: So your governance solution needs to adjust to your specific situation, primarily layered by use case, industry, geography and the like. It needs to have the capability to do risk assessments, governance workflows across multiple stakeholders within the organization, a single

186
00:36:42.770 --> 00:36:47.499
Shyam Nagarajan: point of view through dashboards that gives you a risk exposure,

187
00:36:48.890 --> 00:37:09.030
Shyam Nagarajan: the ability to capture model metadata and documentation for auditability and transparency purposes, and monitoring the metrics, especially with respect to accuracy, drift, bias, fairness, discrimination, HAP, and so on and so forth. Next slide, please.
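
One common way to quantify the data drift Shyam mentions is the population stability index; this is a generic sketch (not how watsonx.governance computes its metrics) comparing a binned reference distribution against live traffic:

```python
import math

def population_stability_index(expected: list[float], actual: list[float]) -> float:
    """PSI over pre-binned proportions; values above ~0.2 are often treated as significant drift."""
    psi = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, 1e-6), max(a, 1e-6)  # avoid log(0)
        psi += (a - e) * math.log(a / e)
    return psi

reference = [0.25, 0.25, 0.25, 0.25]  # binned score distribution at validation time
live      = [0.10, 0.20, 0.30, 0.40]  # binned score distribution in production
print(round(population_stability_index(reference, live), 3))  # ~0.23, above the common 0.2 threshold
```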

188
00:37:10.930 --> 00:37:16.860
Shyam Nagarajan: watsonx.governance is a unified, integrated AI governance platform

189
00:37:17.050 --> 00:37:22.689
Shyam Nagarajan: to govern gen AI as well as predictive ML models that are

190
00:37:22.810 --> 00:37:33.380
Shyam Nagarajan: running on any service provider, including Azure, AWS, on-prem, as well as on IBM Cloud or watsonx platforms.

191
00:37:33.850 --> 00:37:50.760
Shyam Nagarajan: It does three key things that are really critical to all our organizations. One, compliance: the ability to show that you're adopting AI technology within the organization as suggested and recommended by the regulators.

192
00:37:51.060 --> 00:38:04.370
Shyam Nagarajan: Two, proactively detect and mitigate risk, which broadly categorizes into three buckets: reputational risk, organizational risk

193
00:38:04.410 --> 00:38:06.089
Shyam Nagarajan: and regulatory risk.

194
00:38:06.970 --> 00:38:21.320
Shyam Nagarajan: And, lastly, lifecycle governance of models, all the way from being able to manage, monitor and govern them from a design, build, deploy and operate perspective.

195
00:38:21.560 --> 00:38:44.940
Shyam Nagarajan: watsonx.governance is a single comprehensive tool that allows you to do end-to-end lifecycle governance, and is an open platform that allows you to manage models running anywhere within the organization, but also integrate with your existing tooling that you may operate for GRC purposes, and

196
00:38:45.660 --> 00:38:54.690
Shyam Nagarajan: generate and manage automated model documentation for evaluation as well as audit purposes. Next slide, please.

197
00:38:57.900 --> 00:39:26.849
Shyam Nagarajan: This is a deeper view into the same, and what you see on the screen on the left-hand side is really a quick dashboard view of the risk posture and exposure within the organization with respect to AI. The one in the middle is all about drift metrics; it could be drift from the data or the feature or the model, all of that aligned with monitoring metrics for your model.

198
00:39:26.860 --> 00:39:43.540
Shyam Nagarajan: And the last one is about the AI fact sheets, which you use to evaluate whether the model is ready and accurate enough for you to move from dev to test, to validation, to actual production use. Next slide, please.
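
The fact-sheet idea is essentially structured metadata per model version; a minimal, hypothetical sketch (the field names here are assumptions for illustration, not the watsonx.governance schema) might look like this:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ModelFactSheet:
    """Illustrative metadata captured per model version for audit and promotion decisions."""
    model_name: str
    version: str
    intended_use: str
    training_data_sources: list[str]
    evaluation_metrics: dict[str, float]
    stage: str = "dev"  # dev -> test -> validate -> production
    approvals: list[str] = field(default_factory=list)

sheet = ModelFactSheet(
    model_name="loan-risk-scorer",
    version="1.3.0",
    intended_use="pre-screening only; a human reviews every decision",
    training_data_sources=["warehouse.loans_2019_2023"],
    evaluation_metrics={"accuracy": 0.91, "auc": 0.88},
)
sheet.approvals.append("risk-office")
print(asdict(sheet)["stage"])  # dev
```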

199
00:39:45.250 --> 00:39:46.080
Shyam Nagarajan: So

200
00:39:47.700 --> 00:40:05.839
Shyam Nagarajan: You know, watsonx.governance allows an organization to put in place a fully automated framework for managing everything, all the way from model inventory to model risk governance, through continuous evaluation and monitoring for guardrails.

201
00:40:06.227 --> 00:40:27.910
Shyam Nagarajan: You could both develop and deploy on any of the hyperscalers or service providers, or even on-prem, but it's an automated tooling framework, a single environment, that allows you to manage risks and take actions within the organization. Next slide, please.

202
00:40:30.220 --> 00:40:43.229
Shyam Nagarajan: So here's a quick snapshot of its further capabilities. It has the ability to do risk evaluations aligned with the NIST framework or the EU AI Act,

203
00:40:43.230 --> 00:41:04.574
Shyam Nagarajan: or, recently, we just announced a partnership with Credo AI, where you could bring in their policy packs that are available for specific regulations and include that as part of our framework for evaluation of risk, categorization of risk, and actually coming up with the guardrails that are necessary.

204
00:41:05.612 --> 00:41:07.300
Shyam Nagarajan: Next page, please.

205
00:41:09.100 --> 00:41:11.530
Shyam Nagarajan: And this is a quick view of

206
00:41:12.057 --> 00:41:22.090
Shyam Nagarajan: the guardrails that have been determined based on the risk assessment, and how to monitor the metrics for breach status and incident management

207
00:41:22.360 --> 00:41:29.160
Shyam Nagarajan: of the runtime metrics of the models. You also see

208
00:41:29.713 --> 00:41:44.229
Shyam Nagarajan: a workflow that's absolutely necessary for onboarding new models or onboarding new use cases, and evaluation of those for acceptance by a multidisciplinary team that

209
00:41:44.290 --> 00:41:46.819
Shyam Nagarajan: manages the governance within the organization.
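
A minimal sketch of the breach-status monitoring Shyam describes, assuming thresholds configured during risk assessment; the metric names and limits are hypothetical, not the watsonx.governance configuration:

```python
GUARDRAILS = {  # thresholds set during risk assessment (illustrative values only)
    "accuracy_min": 0.90,
    "drift_psi_max": 0.20,
    "bias_disparity_max": 0.10,
}

def check_guardrails(metrics: dict) -> list[str]:
    """Return the list of breached guardrails so an incident can be opened."""
    breaches = []
    if metrics["accuracy"] < GUARDRAILS["accuracy_min"]:
        breaches.append("accuracy below minimum")
    if metrics["drift_psi"] > GUARDRAILS["drift_psi_max"]:
        breaches.append("data drift above threshold")
    if metrics["bias_disparity"] > GUARDRAILS["bias_disparity_max"]:
        breaches.append("fairness disparity above threshold")
    return breaches

print(check_guardrails({"accuracy": 0.93, "drift_psi": 0.31, "bias_disparity": 0.04}))
# ['data drift above threshold']
```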

210
00:41:46.850 --> 00:41:48.210
Shyam Nagarajan: Next slide, please.

211
00:41:49.500 --> 00:41:50.380
Shyam Nagarajan: So

212
00:41:52.010 --> 00:41:53.760
Shyam Nagarajan: what I would like to do is

213
00:41:53.980 --> 00:41:56.050
Shyam Nagarajan: quickly

214
00:41:56.110 --> 00:42:05.450
Shyam Nagarajan: summarize: watsonx.governance is a comprehensive platform that allows organizations to manage risk,

215
00:42:06.075 --> 00:42:11.989
Shyam Nagarajan: create controls and eventually monitor the performance of all models end to end

216
00:42:12.150 --> 00:42:20.260
Shyam Nagarajan: In partnership with Casper Labs, we are now extending that capability to be able to do

217
00:42:20.916 --> 00:42:43.659
Shyam Nagarajan: tamper-proof, auditable version control and multi-stakeholder attestation and management, which is absolutely necessary for building models and deploying models across real-life, enterprise-level use cases. So I welcome Mrinal back to talk about Prove AI.

218
00:42:45.560 --> 00:42:46.647
Mrinal Manohar: Thank you very much.

219
00:42:47.681 --> 00:42:58.280
Mrinal Manohar: As Shyam mentioned, you know, the watsonx.governance platform has a raft of capabilities. Let's talk about some of the differentiators of Prove AI, and why they're important.

220
00:42:59.330 --> 00:43:06.980
Mrinal Manohar: First off, when we talk about multiparty access and accountability, one thing to really understand about the AI tech stack

221
00:43:07.010 --> 00:43:20.010
Mrinal Manohar: is that it's been a multi-party system right from the GPT boom, meaning your source for models, your source for data, your source for who's actually requisitioning the model, and who's taking on legal risk,

222
00:43:20.010 --> 00:43:37.540
Mrinal Manohar: can all be independent parties. But that doesn't indemnify you of the risk; just because you're using a third-party model or third-party data, it doesn't indemnify you of the risk. AI governance is your responsibility, irrespective of how many parties are being touched, and with the Prove AI solution,

223
00:43:37.790 --> 00:43:46.690
Mrinal Manohar: given that blockchains are inherently Turing-complete and have very sophisticated access and data privacy control, we're able to offer that.

224
00:43:47.150 --> 00:43:48.400
Mrinal Manohar: on top of that

225
00:43:48.420 --> 00:44:02.250
Mrinal Manohar: we give tamper-proof auditability. So this is (a) the most secure, but (b) also the most cost-effective way to have a fully auditable AI stack, which is compliant with all the acts that are coming up.

226
00:44:02.420 --> 00:44:11.180
Mrinal Manohar: And as a result of tracking and tracing everything in a tamper-proof manner across multiple parties, you have version control that you can trust in.

227
00:44:11.400 --> 00:44:16.030
Mrinal Manohar: So prove AI gives the best form of version control in the industry

228
00:44:16.270 --> 00:44:17.690
Mrinal Manohar: onto the next slide.

229
00:44:21.330 --> 00:44:22.260
Mrinal Manohar: So

230
00:44:23.080 --> 00:44:30.869
Mrinal Manohar: by derisking AI, we're actually enabling companies to build with confidence. And before I go into the specifics here.

231
00:44:31.360 --> 00:44:36.289
Mrinal Manohar: similar to the automotive analogy that Kay Firth-Butterfield used,

232
00:44:36.740 --> 00:44:53.469
Mrinal Manohar: people are likely to drive a lot faster if they have seat belts on. And really what we're providing here are those seat belts. Many people say things like, Oh, regulation might slow this industry down, or it might lead to innovation being slower.

233
00:44:53.700 --> 00:44:55.429
Mrinal Manohar: Actually, it's the contrary.

234
00:44:55.490 --> 00:45:09.720
Mrinal Manohar: If you're able to govern and run AI models and quickly fix them when they're going off the rails, in fact, innovation will happen much faster. And the reason our platform really enables that is: one, it's fully open,

235
00:45:09.920 --> 00:45:13.180
Mrinal Manohar: So you can use any cloud, any model, any data.

236
00:45:13.260 --> 00:45:18.340
Mrinal Manohar: It's fully auditable, so if something goes wrong or you need to interface with the regulators,

237
00:45:18.809 --> 00:45:23.530
Mrinal Manohar: you have this tamper-proof data store, which is reconciling everything.

238
00:45:23.860 --> 00:45:36.650
Mrinal Manohar: It's private, which means you have impenetrable version management, and you have enhanced data management. And then, finally, it's highly secure, because every single access point is cryptographically secure

239
00:45:36.690 --> 00:45:38.220
Mrinal Manohar: on to the next slide.

240
00:45:40.740 --> 00:45:46.800
Mrinal Manohar: So this is just Prove AI at a glance. On the right corner you can see what the UI/UX looks like,

241
00:45:46.870 --> 00:45:56.660
Mrinal Manohar: but essentially it has several components that allow you to do everything right from the data, pipeline and model selection all the way through deployment and integration.

242
00:45:56.770 --> 00:46:02.919
Mrinal Manohar: So it's a truly tamper-proof serialized, automated source of truth for AI systems.

243
00:46:02.970 --> 00:46:14.199
Mrinal Manohar: Two things to note here: of course, this works across multiple parties, but additionally, it allows third-party access for attestation.

244
00:46:14.270 --> 00:46:31.010
Mrinal Manohar: So if you need your accounting firm, or a Vanta, or a Schellman, to attest that the underlying trace of your AI systems has really not been tampered with and is compliant with current specifications and regulations, you're able to do so in a highly automated manner.
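
To illustrate what a third-party attestation over such a trace could look like mechanically, here is a deliberately simplified sketch; a real auditor workflow would use asymmetric signatures and the platform's own APIs, neither of which is shown here:

```python
import hashlib
import hmac

def attest(record_hash: str, auditor_key: bytes) -> str:
    """The auditor signs the record hash (HMAC used here as a stand-in for a real signature)."""
    return hmac.new(auditor_key, record_hash.encode(), hashlib.sha256).hexdigest()

def verify_attestation(record_hash: str, attestation: str, auditor_key: bytes) -> bool:
    """Anyone holding the key material can check the attestation matches the record."""
    return hmac.compare_digest(attest(record_hash, auditor_key), attestation)

record = hashlib.sha256(b"audit trail for model demo-v1").hexdigest()
key = b"auditor-shared-secret"  # a real deployment would use asymmetric keys, not a shared secret
signature = attest(record, key)
assert verify_attestation(record, signature, key)
```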

245
00:46:31.180 --> 00:46:35.629
Mrinal Manohar: Let's talk about a specific use case. And our 1st design partner on the next slide.

246
00:46:36.700 --> 00:46:41.750
Mrinal Manohar: So case study of what we're building is for a company called Grayscale AI.

247
00:46:41.850 --> 00:47:03.170
Mrinal Manohar: So let me describe what they do and what the challenges they're facing are. Grayscale AI manufactures hardware, and also AI-driven software on top of it, in the food inspection space. So think about chicken nuggets, chocolate bars, French fries, etc. They go through a machine, and the machine detects heavy metals, rot, etc.

248
00:47:03.180 --> 00:47:08.150
Mrinal Manohar: so as you can imagine. This is a complex, highly variable food safety issue.

249
00:47:08.610 --> 00:47:21.000
Mrinal Manohar: And there are lots of varying results, because you have to keep calibrating the machine to eliminate both false positives and false negatives. And you need to be able to certify this inspection.

250
00:47:21.380 --> 00:47:28.000
Mrinal Manohar: So what does Prove AI do? It gives a completely tamper-proof, serialized and automated data store.

251
00:47:28.600 --> 00:47:42.609
Mrinal Manohar: Multiple parties have access to this. So, you know, it could be Grayscale AI themselves, but also they could enable a client, say, for example, just off the top of my head, a Nestlé or a Tyson Foods, to also look at this trace,

252
00:47:42.620 --> 00:47:52.510
Mrinal Manohar: and this is done with a hybrid enterprise blockchain back end, which ensures that you get immutable proofs from the public ledger which cannot be corrupted

253
00:47:52.890 --> 00:48:09.220
Mrinal Manohar: or modified. As a result, the benefit to the end user is that they know that the results of the food inspection are actually thorough and true, and that every single piece of food went through this monitoring system.

254
00:48:09.340 --> 00:48:14.980
Mrinal Manohar: Second, they're confident that the AI model is being highly governed

255
00:48:15.280 --> 00:48:28.770
Mrinal Manohar: and audited, such that false negatives and false positives don't come about, and the food manufacturers themselves have the huge benefit that they can indemnify that every single food item has gone through this inspection process

256
00:48:29.070 --> 00:48:48.459
Mrinal Manohar: As a result of all of these systems, we actually drastically lower the cost of examining these packaged goods, using AI to enable this food inspection, but also having an end-to-end trace, which is net net more cost-effective than not using an AI governance tool.

257
00:48:48.560 --> 00:48:51.570
Mrinal Manohar: So this is just one example in practice.

258
00:48:51.740 --> 00:49:07.789
Mrinal Manohar: With that, we've come to the end of the prepared remarks and we'll hand it over to Q&A. We are gonna put up a link here; you know, we're actively working with design partners, and if any of you are having issues or want to consult

259
00:49:07.790 --> 00:49:24.090
Mrinal Manohar: on how to access great AI governance, please, let's go to the next slide, please, scan this QR code and reach out to us, and we'd love to welcome you as a design partner for Prove AI, which is being built in collaboration with IBM.

260
00:49:24.500 --> 00:49:30.669
Mrinal Manohar: Thank you everyone for listening through those prepared remarks, and I'll hand it back to Matt.

261
00:49:31.287 --> 00:49:33.730
Mrinal Manohar: to curate the Q&A section.

262
00:49:34.830 --> 00:50:02.280
Matt Coolidge: Thank you, Mrinal, and thank you, Shyam and Kay, for a really thoughtful walkthrough here. We are getting a lot of questions; we're gonna do our best to get through as many of them as possible over the next 10 minutes here. For those of you who have questions that are not answered, there will be a follow-up email that goes out at the conclusion of this webinar; feel free to reply directly to that. It is monitored, and we will connect you with the right party to get answers to any unanswered questions. With that, Mrinal, I'll direct this first one to you, though I think Shyam and Kay may want to also augment it.

263
00:50:02.823 --> 00:50:10.470
Matt Coolidge: Can you give some guidance on how I can build a business case and need for AI governance in the enterprise? How do I get buy-in from executives?

264
00:50:12.040 --> 00:50:12.990
Mrinal Manohar: You know. So

265
00:50:13.320 --> 00:50:18.159
Mrinal Manohar: First of all, the real business case is this: this is going to be required,

266
00:50:18.330 --> 00:50:38.680
Mrinal Manohar: right? If you look at the risk spectrum, you know, at the bottom you had things like thermostats, and at the top, things that target children, that are out of bounds. Well, the things that target children are just banned, but almost any AI application is right in the middle of that risk spectrum, which means that you are beholden to that law.

267
00:50:38.780 --> 00:50:42.489
Mrinal Manohar: Now, the business case is really twofold on how to get buy-in

268
00:50:42.520 --> 00:51:07.139
Mrinal Manohar: One, it is actually incredibly cost-effective to have AI governance, because you are no longer governing and training your AI models in a brute-force manner; you're doing it in a highly tracked, highly auditable manner. And so you'll waste much less processing time in training your models, and, God forbid, if something goes wrong, your cost of version control

269
00:51:07.647 --> 00:51:15.770
Mrinal Manohar: goes down significantly. It's also a huge cost saving in terms of being compliant with regulation.

270
00:51:15.770 --> 00:51:42.180
Mrinal Manohar: Since it's all automated, once it's set up, it basically runs in the background, making sure that everything is being traced. As a result, your cost of audit has now gone down significantly. The way to think about it is, if you have a great general ledger system upfront for your financial records, your net cost of producing audited financial output is actually significantly lower. And finally,

271
00:51:42.420 --> 00:51:47.499
Mrinal Manohar: it's really that businesses now feel confident to innovate.

272
00:51:47.650 --> 00:52:00.380
Mrinal Manohar: If you have an AI governance tool, you have the confidence that as long as you set up your triggers appropriately. If things are behaving out of bounds, you will be notified and can take appropriate action.

273
00:52:01.113 --> 00:52:06.939
Mrinal Manohar: I'm happy to hand it over to Shyam and Kay to augment if there was anything I missed, which I'm sure I did.

274
00:52:08.926 --> 00:52:13.520
Kay Firth-Butterfield: I'm happy to hand over, because I don't think you did miss anything.

275
00:52:14.590 --> 00:52:18.890
Shyam Nagarajan: You did a good job, Mrinal. Look, enterprises

276
00:52:19.110 --> 00:52:30.750
Shyam Nagarajan: are being forced to show compliance to the new regulations, so they don't have a choice. If they don't comply, there's going to be a cost of penalty. That itself is an

277
00:52:31.050 --> 00:52:55.980
Shyam Nagarajan: ROI or a business case. But on top of it, we're also starting to see that, without governance, the organization is exposed to significant reputational, organizational, operational, as well as regulatory risk. So all of that comes together and makes it a pretty damn good case for an organization to consider governance.

278
00:52:57.360 --> 00:53:07.089
Matt Coolidge: Wonderful. Okay. This one, I think, is best suited for you, Kay: are there differences and nuances between responsible AI for generative AI versus, quote unquote, regular AI?

279
00:53:08.662 --> 00:53:15.900
Kay Firth-Butterfield: Yes, because of the bias issues and the problems with the large language models. But

280
00:53:15.930 --> 00:53:34.200
Kay Firth-Butterfield: if you're looking for guidance when you're thinking about responsible AI, all the things that we wrote about responsible AI before it was generative AI are equally useful now as they were then.

281
00:53:36.080 --> 00:53:36.890
Matt Coolidge: Wonderful

282
00:53:37.377 --> 00:53:44.542
Matt Coolidge: Tracking through here: Mrinal, Shyam, I think this is probably a good question for both of you. Mrinal, maybe you take the first part and Shyam the second.

283
00:53:44.900 --> 00:54:02.459
Matt Coolidge: We need to capture third-party relationships with vendors and their sub-processors. Is this something that Prove AI can do? That's part one, and then part two is, can this solution, which I believe would be watsonx.governance plus Prove AI, also help with data protection compliance, EU and UK GDPR?

284
00:54:04.260 --> 00:54:08.699
Mrinal Manohar: Alright, I'll go for the first part, and then I'll hand it over to Shyam. So the first part is

285
00:54:08.740 --> 00:54:13.339
Mrinal Manohar: about collecting data from third parties, etc. The answer is

286
00:54:13.500 --> 00:54:15.770
Mrinal Manohar: an unequivocal yes.

287
00:54:15.860 --> 00:54:18.999
Mrinal Manohar: the product was built from the ground up

288
00:54:19.290 --> 00:54:24.139
Mrinal Manohar: to be a multi-party reconciliation and multi-party access, model.

289
00:54:24.770 --> 00:54:34.230
Mrinal Manohar: and that, you know, really comes out of the way in which the AI tech stack has developed. So for the first part, yes. Second part, over to you, Shyam.

290
00:54:36.380 --> 00:54:42.234
Shyam Nagarajan: So multi-party evaluation, especially on

291
00:54:43.930 --> 00:54:45.043
Shyam Nagarajan: you know.

292
00:54:45.740 --> 00:55:14.855
Shyam Nagarajan: today, all off-the-shelf software comes with AI built in as part of it. You need to have the right onboarding model that assesses the risks, gets what you'd call data and compliance cards from these vendors, and is then able to evaluate whether those are within acceptable thresholds and eventually agree on usage: what level of usage, and what

293
00:55:15.290 --> 00:55:37.429
Shyam Nagarajan: data risks it has within the organization. All of this can be done through Prove AI and watsonx.governance, and there is a very well documented workflow for how you go about doing this. So this is one of the most popular use cases for why this governance structure is put in place.
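
A minimal sketch, assuming a hypothetical vendor "compliance card" schema and policy thresholds (not the actual Prove AI or watsonx.governance data model), of how such an onboarding check might evaluate a third-party vendor:

# Illustrative policy limits an organization might set for AI vendors.
POLICY = {
    "max_risk_tier": 2,              # 1 = minimal ... 4 = unacceptable
    "requires_data_residency": True,
    "max_subprocessors": 5,
}

def evaluate_vendor(card):
    """Return (approved, reasons) for a vendor's self-reported compliance card."""
    reasons = []
    if card["risk_tier"] > POLICY["max_risk_tier"]:
        reasons.append(f"risk tier {card['risk_tier']} above allowed {POLICY['max_risk_tier']}")
    if POLICY["requires_data_residency"] and not card["eu_data_residency"]:
        reasons.append("no EU data residency")
    if len(card["subprocessors"]) > POLICY["max_subprocessors"]:
        reasons.append("too many sub-processors")
    return (not reasons, reasons)

# Hypothetical vendor card; this one would be rejected on risk tier alone.
card = {"vendor": "ExampleCo", "risk_tier": 3, "eu_data_residency": True,
        "subprocessors": ["HostingCo", "AnalyticsCo"]}
approved, reasons = evaluate_vendor(card)
print(approved, reasons)   # False ['risk tier 3 above allowed 2']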

294
00:55:39.250 --> 00:55:45.410
Matt Coolidge: Kay, question for you. Do you have any added recommendations regarding diverse representation, including gender balance?

295
00:55:48.139 --> 00:56:04.330
Kay Firth-Butterfield: Well, yes, lots. Let me start with this: when you start thinking about production of an algorithm, then you should be thinking about having a diverse team.

296
00:56:04.973 --> 00:56:13.709
Kay Firth-Butterfield: Why? Well, obviously, you can introduce bias into the algorithm through the developers.

297
00:56:13.770 --> 00:56:35.420
Kay Firth-Butterfield: And those developers tend to be young men under 30. And so if you haven't got enough women scientists in your company, then you really need to be saying, well, how can I introduce women into this thought process?

298
00:56:35.835 --> 00:56:53.550
Kay Firth-Butterfield: You know, the same goes for persons of color. I talked about product and product liability in the EU AI Act, sorry, in the EU, and it does seem to me that what we have lost is seeing software as a product.

299
00:56:53.760 --> 00:57:10.980
Kay Firth-Butterfield: And if you, for example, wanted to sell a product to an old white woman like me (you can't see me, so you'll have to take it on trust that that's me), then you wouldn't

300
00:57:11.020 --> 00:57:27.599
Kay Firth-Butterfield: have just men as the people who develop the product and think about the product. And yet that's what we're tending to default to with AI.

301
00:57:27.610 --> 00:57:30.000
Kay Firth-Butterfield: So really

302
00:57:30.160 --> 00:57:38.229
Kay Firth-Butterfield: thinking about that diversity throughout the process will actually give you better tools and better products.

303
00:57:40.030 --> 00:57:48.702
Matt Coolidge: Great. We have time, I'd say, for about 3 more questions. So again, we're doing our best to get through all of them. Please know that we will be following up with an email response; we will get back to you on any unanswered questions.

304
00:57:48.950 --> 00:57:55.479
Matt Coolidge: Mrinal, given the requirements and constraints for responsibility versus explainability, are there inherent tradeoffs involved here?

305
00:57:57.533 --> 00:58:02.530
Mrinal Manohar: No, I don't think there's an inherent trade off, because I think they're actually complementary.

306
00:58:02.610 --> 00:58:21.059
Mrinal Manohar: The more explainable your underlying AI build is, the more responsibly you're acting. So really, the core feature of using AI responsibly is that you have an audit path, a trace of it. So I don't view those 2 things as orthogonal.

307
00:58:21.230 --> 00:58:26.651
Mrinal Manohar: Having as much transparency and auditability as possible is really what's required to be

308
00:58:27.040 --> 00:58:28.370
Mrinal Manohar: responsible.
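
One way to see the complementarity is that a single record can serve both goals. Below is a hypothetical Python sketch, not any product's actual schema, in which each model decision is logged together with the evidence that explains it, so the audit trail doubles as the explanation trail.

def record_decision(audit_log, inputs, prediction, feature_importance):
    """Append one model decision plus its explanation to the audit log."""
    audit_log.append({
        "inputs": inputs,
        "prediction": prediction,
        # e.g. per-feature contributions from SHAP or a similar explainer,
        # sorted so the strongest drivers of the decision come first
        "explanation": sorted(feature_importance.items(),
                              key=lambda kv: abs(kv[1]), reverse=True),
    })

# Hypothetical decision: the same record answers "what happened" and "why".
audit_log = []
record_decision(audit_log,
                inputs={"income": 42000, "loan_amount": 9000},
                prediction="approve",
                feature_importance={"income": 0.61, "loan_amount": -0.12})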

309
00:58:30.180 --> 00:58:34.720
Matt Coolidge: Kay, one more for you: is the UK bringing its own legislation on AI?

310
00:58:35.882 --> 00:58:55.779
Kay Firth-Butterfield: It has done. It's already world leading in terms of deepfakes, say, with the Online Harms Act. It's not at the moment looking at further legislation, apart from the legislation I talked about, which will give the CMA more powers.

311
00:58:56.358 --> 00:59:04.499
Kay Firth-Butterfield: And of course it has been promoting the safety issues that came out of Bletchley Park.

312
00:59:06.610 --> 00:59:16.019
Matt Coolidge: Wonderful. Final question here, Mrinal, for you. Will there be a live demo of Prove AI to better see and understand how it's working, and how the Casper blockchain will be used?

313
00:59:17.732 --> 00:59:26.520
Mrinal Manohar: Yes. So the product will launch in Q3. You know, along the way we will share demos; we will share more with the community. If someone wants to look at what I'd

314
00:59:27.998 --> 00:59:34.271
Mrinal Manohar: call a proof-of-concept demo, which actually is live running code and, you know,

315
00:59:34.650 --> 00:59:36.840
Mrinal Manohar: pings a public blockchain.

316
00:59:37.349 --> 00:59:45.160
Mrinal Manohar: There's a Gartner webinar; if you just Google "Gartner AI webinar Casper Labs," you'll find a link to it.

317
00:59:45.300 --> 00:59:48.130
Mrinal Manohar: And around minute 25 or 26

318
00:59:48.150 --> 00:59:52.610
Mrinal Manohar: of that webinar, a video starts where we actually show a demo.

319
00:59:52.620 --> 01:00:02.780
Mrinal Manohar: So there's one that you can already see that was built back in November. We've been actively coding since then, and so there's lots more to show you as we approach launch in Q3.

320
01:00:03.710 --> 01:00:24.079
Matt Coolidge: Excellent. All right, thank you, everybody. I know we are at the top of the hour and at time here. Thank you again to everybody who joined. One more QR code for those still here: if you're kind enough to take about 2 minutes, we would love your feedback on today's webinar. We will follow up with this via email as well. I'll give a minute for anybody who wants to scan this, but with that we will conclude the webinar.

321
01:00:26.460 --> 01:00:28.339
Matt Coolidge: and thank you everybody again for joining.

322
01:00:30.460 --> 01:00:31.670
Mrinal Manohar: Thanks. Everyone.

323
01:00:31.670 --> 01:00:32.809
Kay Firth-Butterfield: Thank you.

324
01:00:32.970 --> 01:00:33.359
Shyam Nagarajan: Thank you.

Invaluable insights and strategies from industry leaders.

Hosted by internationally renowned AI policy expert and TIME Impact Honouree, Kay Firth-Butterfield, this event features Casper Labs CEO Mrinal Manohar and IBM Consulting’s Executive Partner Shyam Nagarajan.

This exclusive AI governance webinar not only features a leading advisor in AI and regulatory affairs who shares insights on the EU AI Act, but also provides invaluable insights and strategies from AI policy leaders with a variety of expertise.

Join Kay, Mrinal and Shyam as they tackle the pressing compliance challenges of the EU AI Act and the AI Accountability Act, provide a robust framework for risk management, and share invaluable best practices.

Download Webinar Slides