AI Development as Experience Design

Posted on 13th January 2025 by Matthew Ager

Yesterday, over a Sunday morning cup of tea, I read two articles which, while on the face of it entirely unrelated, combined to give me a deeper understanding of “who we are” as developers of an application powered by Generative AI, and, moreover, of where our responsibilities might lie.

The first was an interview in the previous day's Guardian (11th January 2025) with Brian Eno and his collaborator Bette Adriaanse about their new book What Art Does: An Unfinished Theory, and one passage in particular concerned the future in light of Generative AI. It gave the article its title: ‘being revered’, Eno feels, is others’ “way of excusing themselves from being creative”, and, indeed, he goes on, “whatever it is you think is so brilliant about me, you could do.” So, could AI?

Eno continues, “I think it’s not possible unless you assume intentionality on the part of something being made. […] In fact, I’ve been close to being an AI artist for quite a long time, [but] I have, first of all, had the idea to do [the performance]. Secondly, I built the apparatus by which they are made. And that involves a lot of decisions”.

The second, shared by Sirin Soyoz, our collaborator over the last year or so, was a write-up of an experiment in which almost 1,000 maths students were randomly allocated one of two "GPTs", to find out not only the effect on student grades of working alongside them, but also what happened when access to them was subsequently taken away. The first GPT, “GPT Base”, was simply prompted with its role as a maths tutor and the problem; in addition to this, the second GPT, “GPT Tutor”, was given examples of solutions and feedback, and was instructed not to provide complete solutions to the students.
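
To make the difference between the two set-ups concrete, here is a minimal sketch in Python. The prompt wording, the placeholder examples and the chat() helper are my own assumptions for illustration, not the study's actual materials.

    PROBLEM = "Solve for x: 3x + 7 = 22"

    # "GPT Base": just its role as a maths tutor, plus the problem.
    BASE_SYSTEM = "You are a maths tutor. Help the student with the problem below."

    # "GPT Tutor": the same role, plus example solutions and feedback, plus an
    # instruction not to hand over complete solutions.
    TUTOR_SYSTEM = (
        "You are a maths tutor. Help the student with the problem below.\n"
        "Here are example solutions and feedback on them: ...\n"
        "Do not provide a complete solution; guide the student one step at a "
        "time and ask them to attempt each step themselves."
    )

    def chat(system_prompt: str, user_message: str) -> str:
        """Stand-in for whichever chat-completion API is in use."""
        return f"[model reply under system prompt: {system_prompt[:40]}...]"

    question = f"{PROBLEM}\nStudent: I'm stuck - what's the answer?"
    print(chat(BASE_SYSTEM, question))   # free to give the full solution
    print(chat(TUTOR_SYSTEM, question))  # constrained to scaffold, not solve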

In short, the results were that while students working with either of the two GPTs showed a statistically significant performance increase over the control group (no GPT access), performance subsequently and significantly degraded for those assigned GPT Base but not for those assigned GPT Tutor. With Eno’s ideas and our experiences to date in mind, I feel this is exactly what we should expect!

Having worked with Generative AI both in developing Noticing and Noa’s abilities, and day-to-day as a developer with a ‘copilot’, the overriding patterns that I have noticed are:

  • The more I (am required to) put in, the more I get out.
  • I am prone to wasting time in the constant hope that I can take a breather and do less while AI takes up the slack, but eventually I must do it the right way anyway.

Eno isn’t just advocating the need for Human + AI / Hybrid Intelligent systems, but carefully considered delegation of the AI’s role and tasks by the human, who thereby retains their own creativity and agency within the relationship – there are no shortcuts! GPT Base, in being allowed to offer solutions to the students, feeds their hope that they, like me, can take a breather, but all that ends up doing is undermining their learning and their ability to learn (see Guy Claxton's 'learnacy'). The authors view the restrictions on GPT Tutor’s ability to output solutions as safeguards, and this aligns fully with our view that, by instructing Noa to mediate interactions between users and potentially dangerous unfiltered AI experiences, we are hopefully safeguarding teacher agency.
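
As a rough sketch of what that mediation might look like in code (every name, guardrail and check below is an illustrative assumption, not Noa’s actual implementation):

    GUARDRAIL = (
        "You mediate between a teacher and a general-purpose model. "
        "Stay within the teacher's brief and never return raw, unvetted output."
    )

    def raw_model(prompt: str) -> str:
        """Stand-in for a call to an unfiltered model."""
        return f"[raw model output for: {prompt[:40]}...]"

    def looks_safe(text: str) -> bool:
        """Hypothetical output check, e.g. policy or solution-leak filters."""
        return "unsafe" not in text.lower()

    def mediated_reply(user_message: str) -> str:
        # Be careful about what goes in...
        framed = f"{GUARDRAIL}\n\nTeacher asks: {user_message}"
        candidate = raw_model(framed)
        # ...and critical about what comes out.
        return candidate if looks_safe(candidate) else "Let me put that another way."

    print(mediated_reply("Draft feedback for a struggling student."))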

Whether it is Brian Eno building AI systems which make music, the researchers developing the prompts for GPT Tutor, or us designing and developing Noticing and Noa, those involved in building AI solutions take on the role of Experience Designers. In fact, for teachers and app developers this has always been part of the role – teachers design experiences for their students, app developers design experiences for their users (UX) – but with the advent of AI, experience design must be promoted to a core skill. Brian Eno has already noticed that “the times [AI] works are when people are very careful about what goes in and very critical about what comes out.”

As I look forward, then, I can’t help but think that a very useful way to consider how one might use Generative AI, whatever the domain or project, is through the lens of designing experiences rather than providing finished works, products or solutions. In so doing, we give ourselves the opportunity to consider and design safeguards that protect our users from inadvertent harm.

Written by Matthew Ager

Matthew Ager is Software Architect and Co-Founder of Noticing.

Following his PhD in Applied Mathematics, and two years lecturing in Mathematics and Physics, he has almost 15 years’ experience in product design and development. His professional motivation stems from recognising and understanding patterns in data, both quantitatively and qualitatively.

Matthew is naturally a reflective practitioner, with a keen ability to notice and articulate the subtleties of his own behaviour and that of others. He is passionate about helping others to develop their own reflective practice through technology, for greater wellbeing and professional development.