This process log is part of a series from my HCDE 451: User Experience Prototyping Techniques class, taken in Autumn 2023. The class exposes students to a range of prototyping techniques for solving different types of user experience design problems.
Prototype Context & Protocol
This mini-sprint is a “Wizard of Oz” style experiment, which involves having a participant test out a mock-up product. They will think that the product is functioning, when in reality, one of the researchers is making the product work behind the scenes, similar to the “man behind the curtain” from The Wizard of Oz.
In our experiment, we’re going to simulate a recipe generator bot within Facebook Messenger. Essentially, users will describe what ingredients they have on hand, and the bot will tell them what recipes are available to them. Users will have the flexibility to submit their ingredient preferences in three different formats: text, audio, or image. The recipes will vary in complexity, offering three distinct levels for the user to choose from: simple, moderate, and hard. Additionally, users will have the option to request more recipes or to specify how they’d like the recipe delivered to them. A facilitator will guide the user through the process of active participation.
Behind the scenes, one of our researchers will send ChatGPT-generated recipes to the end user, simulating the responses of a bot/AI. We’ll use Zoom to capture the user’s phone screen, providing a live view of their interactions with the chatbot. Concurrently, a separate camera setup will record the user’s facial expressions and reactions during the test. We will also have a researcher taking notes as the experiment happens.
Analysis
The prototype was generally well received by the participant, despite her initial nervousness about using our “bot” due to her lack of experience with other AI. The smooth execution of the experiment likely eased any nerves or apprehension. What worked especially well was the medium we chose: as far as we could tell, the user legitimately believed that she was talking to a bot on Facebook Messenger. Another aspect that worked well was the dialogue between the “bot” and the participant. Our group was able to accurately mimic how a bot would respond to a person, both in tone and content. This also allowed us to gain insight into how a person would naturally react to and choose to interact with a recipe bot similar to the one we were simulating.
While the prototype was successful overall, there were still some things we could have improved. The biggest challenge was responding promptly to some of the user’s more specific requests, such as when we were trying to accommodate substitutions in a recipe we had provided. Because we had someone googling what the user was requesting, some requests took longer than others to answer, which the participant did notice and comment on. Furthermore, due to the scope of this project, we were not able to support the way the user initially wanted to engage with the bot: the participant wanted to speak to it directly, which we could not accommodate at the time. In future research, this is something we would be prepared to support.
Going into this experiment, we had three specific questions we wanted to address:
- How will the user choose to initiate the conversation with the bot (text, audio, image, etc.)?
- What level of recipe complexity would the user prefer and how would they vocalize that?
- Would the user like to be walked through the recipe or would they like to have it emailed to them?
We obtained very clear answers to the first and third questions: the user clearly wanted to engage with the bot by voice, and she preferred to have the recipe emailed to her directly. Unfortunately, we did not get a clear answer to the second question. The topic of recipe complexity never came up naturally, and we tried to prompt the user as little as possible.
We did receive other critical information about the recipes given to the user. After being offered some potential recipes by the bot, the user realized that she wanted stovetop-specific recipes, which we had to quickly account for. While this isn’t strictly “complexity” related, we originally did not account for the cooking method the user would want to use, and this is something we would keep in mind if we were to re-run this experiment.