A Legal and Ethical Framework For Gen AI
In June, the National Eating Disorders Association decided to shut down an AI-powered chatbot called Tessa. “She” was supposed to serve as a 24-hour counselor for people dealing with eating disorders. Instead, psychologists and activists found Tessa was offering dieting tips.
In the me-first race to adopt AI and push new products out into the world, there are headlines everywhere that freak us out. Some 1,400 leading technologists are imploring companies in this technology arms race to pause. Sam Altman, CEO of ChatGPT maker OpenAI, has recommended the formation of a global consortium on the level of the International Atomic Energy Agency (IAEA) to avoid the catastrophic outcomes that could result from the misuse of AI.
All of us are wondering what our future is going to look like, and companies aren’t quite sure where to draw ethical and legal lines around how to use these very new technologies in their own operations and offerings. (Some organizations have placed an all-out ban on them.) The decisions we make today will shape our world and have reverberating impacts for generations to come.
At IDEO, we know that tools aren’t inherently good or bad—it’s about how they are designed. Our designers have been using these tools for years, and our clients are already experimenting with them. Instead of distancing ourselves from AI, we knew we needed to set clear guidelines, to pave the way for experimentation that aligns with our values. Rather than waiting for our future with AI to be imposed upon us, we’re choosing to intentionally design it.
This isn’t new for us. For more than a decade, IDEO has integrated data science into our design process. Early on, we created the AI Ethics Cards as a foundational element of how we practice algorithmic and product design for AI systems. And through initiatives intended to make us more inclusive, equitable, and responsible, like our Inclusive Design Collective and a workshop for breaking down the potential unintended consequences of our work (its psychological and human rights impact, for example), we help designers think through the ethical considerations of the choices we make and the artifacts we create.
With the proliferation of generative AI, it was time to create more tools to guide our exploration. We dove into design research to understand how generative AI was showing up in our work and how we might make it both legally compliant and caring—built for healthy individuals, healthy relationships, and healthy communities.
Knowing that much of the legal precedent in this domain has yet to be established, we relied heavily on our technologists and the Inclusive Design Collective for recommendations. Their research informed this set of considerations, which also anticipates potential future legislation.
Next, we did a go/no-go exercise, vetting the most popular tools to decide which types of data can and cannot be used, for what reason, and at which stage of our design process. And finally, we made sure that our guidelines included everyone at IDEO, from early adopters to those opposed to using generative AI at all. Here’s where we landed:
Download a free PDF of the Gen AI Legal & Ethical Framework cards here.
To make the framework’s line of questioning even more tangible, we offered some common ethical watchouts and some ways that designers can offset negative effects.
For example:
Ethical Concern:
Images or content produced by AI might be mistakenly taken as factual or truthful and consequently cause unintended harm.
Mitigate with:
- Give credit when you are using AI-generated content, and put the prompt next to the artifact where appropriate (one way to do this is sketched below).
- Validate factual information through a third-party source.
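On that first point, a lightweight way to keep a prompt attached to the artifact it produced is to write it into a sidecar file that travels with the asset. The snippet below is a minimal, hypothetical sketch; the `save_with_attribution` helper, the file paths, and the metadata fields are illustrative assumptions, not something prescribed by the framework itself.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def save_with_attribution(artifact_path: str, prompt: str, model: str) -> Path:
    """Write a sidecar JSON file next to an AI-generated artifact.

    The sidecar records the prompt, model, and timestamp so anyone who
    encounters the artifact later can see how it was produced.
    """
    artifact = Path(artifact_path)
    sidecar = artifact.parent / (artifact.name + ".prompt.json")
    sidecar.parent.mkdir(parents=True, exist_ok=True)
    sidecar.write_text(json.dumps({
        "artifact": artifact.name,
        "prompt": prompt,
        "model": model,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "note": "AI-generated content; validate factual claims with a third-party source.",
    }, indent=2))
    return sidecar

# Example: attach the prompt used to create a concept image.
save_with_attribution(
    "concepts/clinic-waiting-room.png",
    prompt="A calming waiting room for a pediatric clinic, watercolor style",
    model="image-generation-model",
)
```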
Guidelines for this rapidly evolving technology are not a set-and-forget exercise; they represent a snapshot in time. Likewise, it’s not possible to cover every scenario. We plan to revisit and revise them regularly.
With these guidelines in place, we dug into the documentation of generative AI offerings from multiple companies. And to further mitigate risk, IDEO built its own “AI playground” tool to access a version of ChatGPT that complies with our policies and launched it to everyone in the firm—unleashing a tidal wave of AI-powered design that we’re using ourselves and bringing to clients.
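We haven’t published the internals of the playground, but conceptually a tool like this is a thin wrapper that enforces policy before a prompt ever reaches the model: it checks for disallowed content, redacts client-identifying terms, and keeps a record for attribution. The sketch below is purely illustrative; the `redact_client_data` and `playground_chat` functions, the blocked-pattern list, and the `call_model` placeholder are assumptions standing in for whichever chat API and policy rules a firm actually uses.

```python
import re

# Hypothetical policy patterns: stand-ins for whatever a firm's guidelines
# actually disallow (client names, unreleased product details, and so on).
BLOCKED_PATTERNS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),
]

def redact_client_data(prompt: str, client_terms: list[str]) -> str:
    """Replace known client-identifying terms before the prompt leaves the firm."""
    for term in client_terms:
        prompt = re.sub(re.escape(term), "[REDACTED]", prompt, flags=re.IGNORECASE)
    return prompt

def playground_chat(prompt: str, client_terms: list[str], call_model) -> str:
    """Run policy checks, log the prompt for attribution, then call the model.

    `call_model` is a placeholder for the actual chat API the playground wraps.
    """
    if any(p.search(prompt) for p in BLOCKED_PATTERNS):
        raise ValueError("Prompt violates usage guidelines; please rephrase.")
    safe_prompt = redact_client_data(prompt, client_terms)
    print(f"[audit] prompt logged: {safe_prompt!r}")  # keep a record for later review
    return call_model(safe_prompt)

# Usage with a dummy model function standing in for the real API:
reply = playground_chat(
    "Summarize interview themes for Acme Health without naming participants.",
    client_terms=["Acme Health"],
    call_model=lambda p: f"(model response to: {p})",
)
print(reply)
```

Keeping the policy layer separate from the model call also makes it easier to swap in a different model later without revisiting the guidelines themselves.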
Interested in testing out some of these tools and frameworks? We’d love to talk: ai@ideo.com
Thank you to: Esha Reddy, Evan Burton, Zoey Zhu, Graham Gardner, Jeremy Sallin, Jenna Fizel, Nohemy Stella, Tom Antony, Jeremy Chen, Alireza Karduni, Ilene Palacios, Kate Schnippering, and Bo Peng for your contributions to the guidelines.
The information provided in this article and on our toolkit cards does not, and is not intended to, constitute legal advice; all information, content, and materials available on this site are for general informational purposes only and may not reflect the most up-to-date legal or other information. Readers should contact their attorney to obtain advice with respect to any particular legal matter. All liability with respect to actions taken or not taken based on the contents of this article is hereby expressly disclaimed. The posted content is provided "as is"; no representations are made that the content is error-free.