5 Takeaways From TED AI 2024

Hint: They all reinforce a human-centered approach.
Words: Becca Carroll, Dan Read
Visuals: Beth Holzer
Read time: 7 minutes
Published: October 2024

We need to start pulling the human signals from the technological noise, and weaving them together.

That was a key message we took from our time at the second annual TED AI conference, hosted in our own backyard of San Francisco. Two days full of panel sessions, robot demos, and classic TED Talks had us considering the nature of time, the power of language, and transcending human cognitive limitations. The biggest theme: the urgency of bringing human-centered design to AI is only getting stronger.

Here are five key takeaways from our time on the ground.

Beware the black box

The black box approach to AI’s foundation models is the root cause of issues around ownership, copyright, and credit—and, in certain industries like healthcare, the exclusion of entire user bases.

Not knowing what the training data was, where it came from, or who it originally belonged to means that (see the sketch below the list):

  1. We’re not able to give credit or reimbursement to the original creators of that data.
  2. There are unknown but impending legal ramifications from a copyright perspective.
  3. Healthcare professionals are unable to use models such as ChatGPT because, without being able to verify or validate the training data, they can’t confidently (or even legally) use the answers the models provide.
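
It doesn’t take much to imagine what the alternative could look like. Here is a minimal, hypothetical sketch in Python of the kind of provenance metadata a transparent training pipeline might carry with every record; the field names are purely illustrative, not an existing standard.

```python
# Hypothetical sketch: attaching provenance metadata to training records
# so credit, consent, and auditability aren't lost. All field names are
# illustrative; this is not an existing standard or a real pipeline.
from dataclasses import dataclass

@dataclass
class TrainingRecord:
    content: str            # the training example itself
    source_url: str         # where the data was collected
    creator: str            # who originally produced it
    license: str            # e.g. "CC-BY-4.0", "research-use-only"
    consent_obtained: bool  # did the creator opt in?
    collected_at: str       # ISO 8601 timestamp

def flag_for_review(records: list[TrainingRecord]) -> list[TrainingRecord]:
    """Surface records that lack consent or a known creator -- the ones
    a transparent pipeline would exclude, credit, or compensate."""
    return [r for r in records if not r.consent_obtained or not r.creator]
```

With metadata like this in place, crediting creators (point 1 above) and validating sources for regulated fields like healthcare (point 3) become queries rather than guesswork.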

Already, though, some companies are pioneering more transparent and more widely usable data sets and models. HOPPR, for example, is building the first multimodal AI foundation model for medical imaging and radiology, while the non-profit Fairly Trained is based on the belief that many consumers and companies “would prefer to work with generative AI companies who train on data provided with the consent of its creators.”

The concept of designing for transparency reflects a lot of the research we’ve done in the past with users and data. Our 2019 work with Mozilla, exploring more ethical and user-centered responses to the Surveillance Economy, highlighted the pitfalls of data collection and the opportunities for tech companies to build value by handling user data in more human-centered ways. Combine that with our blockchain work at IDEO CoLab, and there are lots of interesting spaces to explore and draw inspiration from—including individual and collective control, consent, and value generation.

Consider the human consequences

A lot of industry specialists are focused on the near future of the technology rather than the mid-term opportunities and AI’s long-term effects on humanity. Some speakers were concerned that we’re spending too much time worrying about what the technology will do for us; others think we’re too concerned about what it will do to us. There was a lot of focus (for better or worse) on productivity benefits, but relatively few people were thinking about the consequences for humans.

Eugenia Kuyda, founder and CEO of Replika, noted that we should “stop designing for productivity, and start designing for joy,” which is a lovely provocation. We don’t need to forsake one for the other: designing for joyous experiences can make productivity far more desirable. Our work with Ethiqly on integrating AI and writing exercises in schools focused not on the power of the technology or how many more essays kids could write, but on making the process of getting your creativity onto paper more joyful and rewarding, and on the relationship between teacher and student.

There were also several conversations around the idea that technology will free up time for us to spend with our families instead of working. As romantic as this sounds, it’s not realistic. Organizations will simply reduce headcount and increase the workloads of remaining employees. Articles like this one from the BBC hint at a workforce feeling more burnt out due to AI implementation, offering a glimpse of the human consequences we should be designing for at the application layer.

Embrace quality over quantity

Several people spoke about the exponential growth of investment in computing power for large language models (LLMs). Currently, 70 cents of every dollar invested in AI models goes into crunching the data.

As an alternative to this massive use of resources, several speakers brought up the concept of small language models (SLMs). SLMs are literally smaller models, trained on around 1 billion tokens (LLMs can require upwards of 1 trillion), so they can be created in a fraction of the time and with a fraction of the energy while offering great utility for specific tasks. Consider the example of a telecom call center: its model doesn’t need to be exposed to the nuances of Shakespearean sonnets; it just needs to know the words and phrases required to operate a call center, so you can spend less time, money, and energy creating it.
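
To make the contrast concrete, here is a minimal sketch of the SLM approach using the Hugging Face transformers library: rather than training a giant general-purpose model, you fine-tune a small off-the-shelf one on narrow domain text. The model choice (distilgpt2) and the transcript file are placeholders, not recommendations.

```python
# A minimal sketch of the SLM idea: take a small, off-the-shelf model
# and fine-tune it on narrow domain text (e.g. call-center transcripts)
# instead of training a giant general-purpose LLM from scratch.
# "distilgpt2" and "call_center.txt" are placeholders, not endorsements.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "distilgpt2"  # ~82M parameters, tiny by LLM standards
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 models have no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# One plain-text file of domain transcripts stands in for the
# "right information" the panelist described.
dataset = load_dataset("text", data_files={"train": "call_center.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

train_set = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="slm-call-center", num_train_epochs=1),
    train_dataset=train_set,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # runs on a single GPU, not a data center
```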

With the tens and hundreds of millions of dollars we’re already spending to train LLMs, how much are we willing, and financially able, to spend on future models? If we continue down this path, the financial, physical, and electrical requirements will become unsustainable. As one panelist put it, “we should be feeding the machine the right information vs. all the information.”

The potential future of more intentional, nimble SLMs seems far more approachable, and more feasible both financially and ethically.

Spoken language is irreplaceable

One of our favorite speakers was Jessica Coon, the linguist who consulted on the movie Arrival. She argued that human language is rich, nuanced, and endowed with the histories of our cultures. An input text box attached to a keyboard populated with Roman characters will never express more than a fraction of it.

Rikin Gandhi, the co-founder of Digital Green, a company using AI to provide agricultural information and guidance, spoke about the company’s work with farmers in rural India, where illiteracy and nuanced local dialects make text-based offerings unviable. But AI models’ ability to understand spoken language, locational context, and the nuances of a given agroecology at a given time of year means those farmers can interact through images, video, and the spoken word.
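
As a rough illustration of how low the technical barrier to a voice-first interface has become, here is a short sketch using an off-the-shelf speech-recognition model via Hugging Face’s pipeline API. This is not Digital Green’s implementation; the model choice and the audio file name are placeholders.

```python
# Sketch of a voice-first interaction: transcribe a spoken question,
# then hand the text to whatever system answers it. NOT Digital Green's
# stack; the model and the audio file are placeholders.
from transformers import pipeline

# Whisper is a multilingual speech-recognition model; the "small"
# checkpoint trades some accuracy for speed.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

result = asr("farmer_question.wav")  # common audio formats work
print(result["text"])                # the spoken question, as text
```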

This perspective shines a spotlight on the potential for more diverse and inclusive design principles, and our spoken language is a key part of that future.

Discover a new impetus for responsible design

We also heard from Pratyusha Sharma, a leading researcher and member of Project CETI, who shed light on how we might leverage generative AI to unravel unsolved mysteries of the ecological world.

Sperm whales communicate through clicks, or codas. Researchers have understood for decades that codas are the primary pattern by which this social species coordinates, but it had been impossible to untangle the structure of that language and what it meant. Sharma spoke about a novel use of generative AI, a generative adversarial network (GAN), to surface both an alphabet-like structure in the codas and a deeper understanding of their meanings. With this technology, not only can we hear the sperm whales’ communication, we can begin to understand what it means and predict from their exchanges what they might do next, whether that’s diving for food or coordinating childcare.
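
For readers curious what a generative adversarial network means mechanically, here is a toy-scale GAN skeleton in PyTorch: a generator learns to produce feature vectors that a discriminator can no longer tell apart from real ones. This illustrates only the adversarial setup, with random tensors standing in for data; it is not Project CETI’s actual model.

```python
# A toy GAN skeleton in PyTorch, shown only to illustrate the
# adversarial setup. Random tensors stand in for real "coda" features;
# this is NOT Project CETI's model or data.
import torch
import torch.nn as nn

FEATURES, NOISE = 16, 8  # toy dimensions for coda features and noise

gen = nn.Sequential(nn.Linear(NOISE, 32), nn.ReLU(), nn.Linear(32, FEATURES))
disc = nn.Sequential(nn.Linear(FEATURES, 32), nn.ReLU(), nn.Linear(32, 1))

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)

real_codas = torch.randn(256, FEATURES)  # stand-in for real coda features

for step in range(200):
    # Train the discriminator: real codas -> 1, generated codas -> 0.
    z = torch.randn(64, NOISE)
    fake = gen(z).detach()
    real = real_codas[torch.randint(0, len(real_codas), (64,))]
    d_loss = (loss_fn(disc(real), torch.ones(64, 1))
              + loss_fn(disc(fake), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator to fool the discriminator.
    z = torch.randn(64, NOISE)
    g_loss = loss_fn(disc(gen(z)), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```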

As AI tools become more and more powerful in helping us understand the language of the world around us, the impetus for responsible design will become clearer and clearer. If we can start to interpret the language that plants, animals, and the earth use to communicate, how can they not be stakeholders in everything we design?

Curious how human-centered design could help you make more desirable AI products? We’d love to chat.

Becca Carroll
Executive Design Director
Becca is a business designer living, working, and ODing on nitro cold brew in San Francisco. She loves small dogs, spin class, and Harry Potter, not necessarily in that order.
Dan Read
Head of Brand
Dan has over 15 years of experience in design and marketing, and leads IDEO's brand. An interaction designer at heart, he brings an interdisciplinary and human-centered philosophy to all of his work.
Beth Holzer
Marketing Visual Design Lead
Beth brings ideas to life with visual design, using craft to add context and texture. She specializes in translating complex ideas into imagery that tells compelling stories.
