After a very fast start for GenAI, many organizations are racing to get AI models and applications into production. While expectations are on the rise, AI adoption is stalling. Many ​​​​organizations are facing similar innovation challenges, like gaps in technical and operating capabilities and challenges in aligning business objectives and technological innovation with governance and controls.

To that end, Slalom hosted a Responsible AI Executive Workshop on July 23 to help our clients and their clients connect with peers and expand their understanding of Responsible AI. Read the event recap below to learn more.

Event Recap

 

Responsible AI Landscape Overview: Our experts discussed our organization’s take on responsible AI, covering unintended outcomes, regulations, information to action, privacy, and ethical considerations. They also provided an overview of Slalom’s responsible AI checklist and compliance considerations.

 

Panel: From Principles to Practice: The panelists reflected below shared their personal experiences regarding how organizations can identify responsible AI (RAI) principles and implement RAI tools and practices throughout the value chain. They also addressed overcoming barriers and RAI blockers and discussed frameworks other organizations can utilize.

 

Chatbot on Rails: This session highlighted the integration of Salesforce and NVIDIA guardrails in conversational AI agents to manage sensitive topics and align with NIST guidelines. Through practical examples, such as preventing congratulatory messages on healthcare outcomes and masking personal information, our experts demonstrated how guardrails can be used to manage privacy, security, and contribute to responsible AI practices.

 

Soul Machines: Our experts provided a demo of machines powered by a “digital brain” that can interact with humans with empathy and compassion. This demo specifically touched on the ethical and regulatory implications of using digital humans, emphasizing the need for careful consideration of how they are integrated into technology solutions.

 

AI Red Teaming: Addressing biases in AI goes beyond just the technical aspects—it requires a deep understanding of the sociotechnical landscape. Our experts walked through an experiment where we focus on a general use gen-AI chatbot and the potential for downstream impact and harm by changing the name of a single employee and assessing model outputs, observing different outcomes, and highlighting the critical need for comprehensive red teaming in diverse contexts.