Siri AI

Role //

Product Designer, Frontend Developer


Duration //

Jan 2025 - Present

In this project, I explore how generative AI can improve CUI (Conversational User Interface) interactions, focusing specifically on Siri. The experience was designed in Figma and developed with Next.js on the frontend, with generative capabilities powered by the OpenAI API.



This is an ongoing project; check back soon for more details!



GitHub

Problem Space

Currently, CUIs struggle to handle non-command user requests. When faced with such inputs, their responses are often no more advanced than a standard online search, returning results similar to what a user could find on their own.

This presents an opportunity for generative AI to offer users actionable insights and tailored responses beyond simple search results.

Command prompts for Siri

Use Cases

01: Multi-App Experiences

Generating tailored advice that populates the appropriate apps.

For example, when learning to bake a cake, Siri places a recipe in the Notes App, a series of timers for scheduling in the Timer App, and a cooking playlist in Spotify.

Mini-App Journey
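As a rough sketch, one way to model this fan-out is a single generated response that expands into per-app actions. The type and field names below are illustrative assumptions, not the project's actual schema:

```typescript
// Hypothetical sketch of how one generated response fans out across
// multiple apps. Type and field names are illustrative, not the
// project's actual schema.
type TargetApp = "notes" | "timers" | "spotify";

interface AppAction {
  app: TargetApp;
  title: string;
  payload: string | number; // recipe text, timer seconds, or a playlist query
}

// A single "teach me to bake a cake" request might expand into:
const cakeActions: AppAction[] = [
  { app: "notes", title: "Chocolate Cake Recipe", payload: "1. Preheat the oven to 350°F..." },
  { app: "timers", title: "Bake", payload: 35 * 60 }, // 35 minutes, in seconds
  { app: "spotify", title: "Cooking Playlist", payload: "upbeat kitchen mix" },
];
```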

Design

I wanted users to interact dynamically with the generated content, enabling actions like regenerating, editing, and rearranging it to refine responses.

Tiles, Prompt Box, and Input Box Components

Tile Layout

Prototyping

I began the project by exploring the capabilities of generative AI, experimenting with various prompts to elicit the most detailed responses. I developed the following prototypes to understand how to integrate a generative AI tool with the frontend using Next.js and the OpenAI API.
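A minimal version of that wiring might look like the following Next.js route handler. The model name, system prompt, and file path here are placeholder assumptions rather than the project's exact setup:

```typescript
// app/api/generate/route.ts
// Minimal sketch of a Next.js App Router endpoint that forwards a user
// prompt to the OpenAI API. Model choice, system prompt, and file path
// are assumptions, not the project's exact setup.
import OpenAI from "openai";
import { NextResponse } from "next/server";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function POST(request: Request) {
  const { prompt } = await request.json();

  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: "You are a Siri-like assistant. Give concrete, actionable answers." },
      { role: "user", content: prompt },
    ],
  });

  return NextResponse.json({ result: completion.choices[0].message.content });
}
```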

01: Event Generation

I started by experimenting with how well the OpenAI API could generate steps and action items.

Event Generation Prototypes
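A simplified sketch of the kind of prompt behind these prototypes: asking the API to return steps and action items as structured JSON. The wording and schema are illustrative, not the exact prompt used:

```typescript
// Sketch of prompting for structured steps and action items. The schema
// and wording are illustrative, not the prototype's exact prompt.
import OpenAI from "openai";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const eventPrompt = `
Plan the steps for hosting a small dinner party.
Respond with JSON only, in the shape:
{ "steps": [{ "title": string, "detail": string, "durationMinutes": number }] }
`;

const completion = await client.chat.completions.create({
  model: "gpt-4o-mini",
  response_format: { type: "json_object" }, // ask the API to return valid JSON
  messages: [{ role: "user", content: eventPrompt }],
});

const { steps } = JSON.parse(completion.choices[0].message.content ?? "{}");
console.log(steps); // an array of step objects ready to render as tiles
```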

02: Regeneration

When prototyping regeneration, I thought about how users currently interact with generative AI tools. I played around with cases where users would want to:

1. Completely change the generated content
2. Add/Remove from the generated content

With a bit of prompt engineering, here is the end result:

Regeneration prototype when asked to change directions and add content
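Under the hood, both cases boil down to keeping the prior exchange in the message history and re-prompting. Here is a minimal sketch, with illustrative prompt wording:

```typescript
// Sketch of the two regeneration cases. Keeping the prior exchange in the
// message history lets the model either replace or amend its last answer.
// Prompt wording is illustrative.
import OpenAI from "openai";
import type { ChatCompletionMessageParam } from "openai/resources/chat/completions";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function regenerate(
  history: ChatCompletionMessageParam[],
  mode: "replace" | "amend",
  instruction: string
) {
  const followUp =
    mode === "replace"
      ? `Discard your previous answer and instead ${instruction}.`
      : `Keep your previous answer, but ${instruction}.`;

  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [...history, { role: "user", content: followUp }],
  });
  return completion.choices[0].message.content;
}

// e.g. regenerate(history, "amend", "add a gluten-free variation")
```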

03: Grouping and Organizing Content

I then explored how the OpenAI API could help organize and group the generated information into left, middle, and right columns.

Tiling prototype when asked to bake a cake
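A sketch of how that grouping request can be expressed, asking the model for a fixed three-column JSON shape. The schema is illustrative, not the prototype's exact format:

```typescript
// Sketch of grouping generated content into three dashboard columns by
// asking for a fixed JSON shape. Schema is illustrative.
import OpenAI from "openai";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const tilingPrompt = `
I want to bake a cake. Organize your advice into three columns of tiles.
Respond with JSON only:
{ "left":   [{ "title": string, "body": string }],
  "middle": [{ "title": string, "body": string }],
  "right":  [{ "title": string, "body": string }] }
`;

const completion = await client.chat.completions.create({
  model: "gpt-4o-mini",
  response_format: { type: "json_object" },
  messages: [{ role: "user", content: tilingPrompt }],
});

const columns = JSON.parse(completion.choices[0].message.content ?? "{}");
// columns.left, columns.middle, and columns.right map directly onto the tile layout
```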

Dashboard UX

After generating the tiled app views, I recognized that the interface was naturally evolving into a dynamic dashboard. This realization prompted me to explore how tile resizing and the addition of new columns could enhance the layout and overall user experience.

01: Tile Resizing

To enhance interactivity, I introduced an active state for individual tiles. When a user engages with a specific tile, it highlights with a distinct border color and expands vertically. This helps users see their edits in context, emphasizing how their input contributes to the broader dashboard experience.

Tile Resizing
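A minimal sketch of the active-tile behavior in React; the styling values are illustrative, not the project's actual design tokens:

```tsx
// Sketch of the active-tile behavior: selecting a tile highlights its
// border and lets it expand vertically. Styling values are illustrative.
import { useState, type ReactNode } from "react";

function Tile({ children }: { children: ReactNode }) {
  const [active, setActive] = useState(false);

  return (
    <div
      onClick={() => setActive((a) => !a)}
      style={{
        border: active ? "2px solid #0a84ff" : "1px solid #e5e5e5",
        maxHeight: active ? "none" : 160, // expand vertically when active
        overflow: "hidden",
        borderRadius: 12,
        padding: 16,
        transition: "border-color 150ms ease",
      }}
    >
      {children}
    </div>
  );
}
```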

02: Adding Columns

To accommodate growth, users can add an additional column of apps to their dashboard—a scaled-down implementation of future functionality. This sets the foundation for a more customizable experience, where users will eventually be able to generate and specify the types of apps they want to include.

Adding Column to Dashboard
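Since each column is just an array of tile data in state, adding a column can be as simple as appending an empty array. A minimal sketch, with illustrative types and markup:

```tsx
// Sketch of the add-a-column interaction: each column is an array of tile
// data in state, so adding a column appends an empty array. Types and
// markup are illustrative.
import { useState } from "react";

type TileData = { title: string; body: string };

function Dashboard({ initial }: { initial: TileData[][] }) {
  const [columns, setColumns] = useState<TileData[][]>(initial);

  const addColumn = () => setColumns((cols) => [...cols, []]);

  return (
    <div style={{ display: "flex", gap: 16 }}>
      {columns.map((column, i) => (
        <div key={i} style={{ flex: 1 }}>
          {column.map((tile) => (
            <div key={tile.title}>{tile.title}</div>
          ))}
        </div>
      ))}
      <button onClick={addColumn}>+ Add column</button>
    </div>
  );
}
```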

03: Loading Screen

To provide users with clear feedback during result generation, I designed a loading experience inspired by Apple’s AI design system.

Loading
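A sketch of how a loading state can gate the dashboard while generation is in flight, calling the hypothetical /api/generate route from the earlier sketch:

```tsx
// Sketch of gating the dashboard behind a loading state while generation
// is in flight, calling the hypothetical /api/generate route from the
// earlier sketch.
import { useState } from "react";

function Generator() {
  const [loading, setLoading] = useState(false);
  const [result, setResult] = useState<string | null>(null);

  const generate = async (prompt: string) => {
    setLoading(true);
    try {
      const res = await fetch("/api/generate", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ prompt }),
      });
      setResult((await res.json()).result);
    } finally {
      setLoading(false);
    }
  };

  if (loading) return <div aria-busy="true">Generating…</div>; // loading treatment renders here
  if (result) return <div>{result}</div>;
  return <button onClick={() => generate("Teach me to bake a cake")}>Generate</button>;
}
```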

More to come soon!