While individual panels covered a wide range of topics, one common thread united all of them: AI – and generative AI, in particular – has tremendous potential to upend current societal structures. Whether that is a net positive or not rests on the decisions we collectively make in the near term.
Filecoin Foundation Chairwoman Marta Belcher noted the importance of thinking about AI as one piece of a larger puzzle. Rather than simply adopting AI for AI’s sake, she pointed out, organizations should think critically about convergence opportunities that democratize access to data.
The afternoon continued with a panel examining the private sector’s role in realizing responsible AI standards, featuring executives from Microsoft, Google Cloud, Filecoin Foundation and the ETH AI Center. The conversation inevitably gravitated towards the importance of public-private collaboration and the need to put clearer governance standards in place.
Currently, as panelists noted, there are too many competing frameworks, including the proposed AI Bill of Rights in the US and the EU’s AI Act. More legislative clarity is needed before the private sector can align on a set of standards that appropriately balances AI innovation with data privacy and security considerations. And patience will still be required: the largest AI firms today, including Google, Microsoft and IBM, tend to move at a more deliberate pace and must adhere to a clear set of standards that has yet to be defined. The answer, as Filecoin Foundation’s Clara Tsao pointed out, rests in public-private sector collaboration.
Leaders from Microsoft, Filecoin Foundation, Google Cloud and ETH AI Center discuss the importance of responsible AI standards at The Hub.
Later in the afternoon, IBM’s Shyam Nagarajan joined Casper Labs CEO Mrinal Manohar on stage to preview the AI governance solution being jointly developed by the two companies. Generative AI, while tantalizing in its potential, introduces unprecedented third-party risk to firms. As AI regulation continues to evolve, it’s increasingly critical for organizations to be able to transparently audit the training data that feeds AI, as well as to revert to previous versions when performance issues occur. That capability doesn’t exist today, and it’s the gap IBM and Casper Labs are directly addressing with their solution, expected in market in Q3 2024.
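To make the idea of a tamper-evident version audit trail concrete, here is a minimal sketch in Python. To be clear, this is purely illustrative: the design of the IBM and Casper Labs solution has not been published, so every name below (VersionRecord, AuditLog, and so on) is a hypothetical stand-in for the general pattern of hash-chaining dataset and model versions so that any after-the-fact edit becomes detectable.

```python
import hashlib
import json
from dataclasses import dataclass

# Hypothetical illustration only. This is NOT the IBM / Casper Labs product;
# it sketches the generic pattern of a hash-chained audit log of AI versions.

@dataclass
class VersionRecord:
    version: str        # human-readable model/dataset version label
    dataset_hash: str   # e.g. SHA-256 of the training data snapshot
    prev_hash: str      # hash of the previous record, chaining the log

    def record_hash(self) -> str:
        payload = json.dumps(self.__dict__, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

class AuditLog:
    def __init__(self) -> None:
        self.records: list[VersionRecord] = []

    def append(self, version: str, dataset_bytes: bytes) -> VersionRecord:
        prev = self.records[-1].record_hash() if self.records else "genesis"
        rec = VersionRecord(version, hashlib.sha256(dataset_bytes).hexdigest(), prev)
        self.records.append(rec)
        return rec

    def verify(self) -> bool:
        # Recompute the chain; editing any earlier record breaks every later link.
        prev = "genesis"
        for rec in self.records:
            if rec.prev_hash != prev:
                return False
            prev = rec.record_hash()
        return True

log = AuditLog()
log.append("v1", b"training data snapshot 1")
log.append("v2", b"training data snapshot 2")
assert log.verify()  # fails if any recorded version is later altered
```

In a blockchain-backed deployment, each record hash would presumably be anchored on-chain, turning a merely tamper-evident log into one that is tamper-proof in practice, since no single party can rewrite the chain.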
Blockchain’s convergence with AI was another recurring theme; the complementary nature of the two technologies was widely recognized throughout the day. What’s especially noteworthy about AI is that, unlike more mature technologies, it has no incumbent technology stack. That makes it the perfect application for blockchain, which can bring unprecedented economic and security benefits to AI deployments (think: a more effective way to see through AI’s black box in a tamper-proof manner, and a more cost-effective way to evolve beyond AI’s “brute force” era).
Another trend that warrants mention here is bias in large language models (LLMs). For all the talk about AI’s potential, precious little consideration has been given to the actual data feeding AI systems and informing their outputs. In this context, the question is less whether that data is copyrighted (a pressing issue in its own right) and more where it comes from. If AI systems are meant to be truly open and democratic, as their proponents tout, it’s critical that they draw upon diverse sources of reference data.
Next Up at The Hub
Tomorrow’s focus is “Blockchain in Action,” featuring a series of panels in which leading blockchain firms discuss how they’re driving cost-effective and secure outcomes for modern businesses. As always, please follow the Casper Labs YouTube page to view the latest panels from The Hub.