With the rise of Generative AI, more than ever before, organizations need to think about building AI systems in a responsible and governed manner.
This ebook unpacks the risks of Generative AI and introduces the RAFT (Reliable, Accountable, Fair, and Transparent) framework for Responsible AI, showing how it can be applied to both traditional and Generative AI systems.
Dataiku’s baseline approach to Responsible AI is the RAFT framework: Reliable, Accountable, Fair, and Transparent. The values outlined in RAFT are crucial for the development of AI and analytics, and they apply to both traditional methods and new techniques in Generative AI.
Executing effectively on these principles requires understanding the potential risks and impacts of the technology. In this ebook, we cover the specific risks of Generative AI as well as broader approaches to Responsible AI practice. You will also find a full version of the RAFT framework, ready for adaptation and use at your organization.
The following risks are common across various types of Generative AI technology but surface in different ways depending on the use case: toxicity, polarity, discrimination, human-computer interactions, disinformation, data privacy, model security, and copyright infringement.
The potential harms listed here are not exclusive to language models, but they are heightened by the use of natural language processing (NLP) techniques to analyze, categorize, or generate text in a variety of business contexts.
Understanding and addressing these risks before incorporating an LLM or other Generative AI techniques into an AI system is crucial to ensuring the responsible and governed use of the latest technology. This ebook shows you how.