Responsible Generative AI

Great Power,
Great Responsibility

With the rise of Generative AI, more than ever before, organizations need to think about building AI systems in a responsible and governed manner.

This ebook unpacks the risks of Generative AI and introduces the RAFT (Reliable, Accountable, Fair, and Transparent) framework for Responsible AI, showing how it can be applied to both traditional and Generative AI systems.



Introducing the RAFT Framework

Making AI — Including Generative AI — Responsible

Dataiku’s baseline approach to Responsible AI is called RAFT: Reliable, Accountable, Fair, and Transparent. The values outlined in the RAFT framework are crucial for the development of AI and analytics, and they cover both traditional methods and new methods in Generative AI.

Executing effectively on these principles requires understanding the potential risks and impacts of the technology. In this ebook, we cover the specific risks of Generative AI and broader approaches to Responsible AI practices. You will also find a full version of the RAFT framework, ready for adaptation and use at your organization.


Risks of Generative AI

Mitigate the Challenges of Enterprise Generative AI

The following risks are common across various types of Generative AI technology but surface in different ways across use cases: toxicity, polarity, discrimination, human-computer interaction, disinformation, data privacy, model security, and copyright infringement.

The potential harms listed here are not exclusive to language models, but they are heightened by the use of natural language processing (NLP) techniques to analyze, categorize, or generate text in a variety of business contexts.

Understanding and addressing these risks before implementing an LLM or other Generative AI technique into an AI system is crucial to ensuring the responsible and governed use of the latest technology. This ebook shows you how.

Assessing Potential Impacts

2 Dimensions to Understand Generative AI Impact

  1. Whether the risk could materialize as a harm to individuals and groups directly, because of the solution’s implementation, or indirectly, because of a constellation of factors that are difficult to qualify at the time of deployment.
  2. Whether the risk could materialize as a harm immediately or over a longer period of time.
Read the full ebook to see how these dimensions look in practice for real-life use cases and how the RAFT framework should be applied accordingly.