
Interactive Grid

Role //

Web Development, Creative Coding, Interaction Design


Duration //

November 2022


Tools //

React.js, P5.js, Google Teachable Machine, ML5.js

This project is a collection of interaction design experiments presented in the form of a game. Using Google Teachable Machine and the ML5.js libraries, I gained a deeper understanding of how to incorporate interactions into designs, and explored whether there are better ways to design interactions for the human information-processing model.



Live Website Github

cover image

Problem Space

The purpose of this website is to educate designers on machine learning libraries and on methods for designing accessible interactions: a keyboard is often hard to use for people with disabilities that affect mobility, and hardware designed for accessibility is often expensive and uncomfortable. The website presents a variety of experimental interactions that users can activate with their hands, faces, everyday objects around them, and sound. Users are encouraged to experiment with the interactions to:

1. Set a color to a module.

2. Move across the grid.

Interactions

Shifting away from the keyboard and mouse, I wanted to use sounds and hand signals to interact with the computer, so the main input channels were the webcam and the microphone. These experiments look at how other modes of interaction can heighten, or complicate, a user's experience of navigating an interface.

Color Assignment

Visual Detection

When the user moves to a block, they can color it blue, purple, or pink using different hand gestures. I used the ML5.js handpose library to detect how many fingers are held up and map each count to the color it represents.

Model in action
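A minimal sketch of how this could be wired up with ml5's handpose model (v0.x API). The landmark indices, "finger raised" heuristic, and count-to-color mapping are illustrative assumptions, not the project's exact values.

```javascript
// Finger-counting color picker: p5.js + ml5.js handpose.
let video, handpose;
let predictions = [];

// Assumed mapping: number of raised fingers -> module color.
const COLORS = { 1: 'blue', 2: 'purple', 3: 'pink' };

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  handpose = ml5.handpose(video, () => console.log('handpose ready'));
  handpose.on('predict', results => (predictions = results));
}

// A finger counts as "up" when its tip sits above its middle joint on screen.
function countRaisedFingers(landmarks) {
  const fingers = [
    [8, 6],   // index: [tip, pip]
    [12, 10], // middle
    [16, 14], // ring
  ];
  return fingers.filter(([tip, pip]) => landmarks[tip][1] < landmarks[pip][1]).length;
}

function draw() {
  image(video, 0, 0, width, height);
  if (predictions.length > 0) {
    const count = countRaisedFingers(predictions[0].landmarks);
    const color = COLORS[count];
    if (color) {
      // In the real grid this would color the currently selected module.
      fill(color);
      rect(10, 10, 60, 60);
    }
  }
}
```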

Audio Detection

I used the Google Teachable Machine to incorporate auditory interactions into the website, training a model to detect knocks, the crinkling of paper, and claps.

Training Google Teachable Machine to recognize sounds


Model in action
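A sketch of loading a Teachable Machine audio model through ml5's soundClassifier. The model URL is a placeholder, the class labels and confidence threshold are assumptions, and setModuleColor stands in for the project's grid logic.

```javascript
// Teachable Machine sound classifier via ml5.js.
const MODEL_URL = 'https://teachablemachine.withgoogle.com/models/YOUR_MODEL_ID/model.json';
let classifier;

function setup() {
  noCanvas();
  classifier = ml5.soundClassifier(MODEL_URL, modelReady);
}

function modelReady() {
  // classify() keeps listening to the microphone and fires on every prediction.
  classifier.classify(gotResult);
}

function gotResult(error, results) {
  if (error) return console.error(error);
  const { label, confidence } = results[0];
  if (confidence < 0.8) return; // ignore uncertain detections (threshold assumed)
  if (label === 'Knock') setModuleColor('blue');
  if (label === 'Crinkle') setModuleColor('purple');
  if (label === 'Clap') setModuleColor('pink');
}

function setModuleColor(color) {
  // Placeholder: in the project this colors the currently selected module.
  console.log('color module', color);
}
```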

Object Recognition

Thinking about other ways users could use the webcam to trigger interactions, I turned to the ML5.js object detection library: the user first assigns everyday objects to colors, then holds an object up to the camera to color the current module.

Demonstration of how the object detection interaction works: first assign objects to colors, then use the objects to color modules.
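A sketch of that two-step flow with ml5's objectDetector and the COCO-SSD model. The assignments map and the setModuleColor helper are illustrative assumptions rather than the project's actual code.

```javascript
// Object-to-color flow: p5.js + ml5.js objectDetector (COCO-SSD).
let video, detector;
const assignments = {}; // e.g. { cup: 'blue', book: 'pink' }

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  detector = ml5.objectDetector('cocossd', () => detector.detect(video, gotDetections));
}

function gotDetections(error, results) {
  if (!error && results.length > 0) {
    const label = results[0].label;
    if (assignments[label]) {
      // The object has already been assigned a color: apply it to the module.
      setModuleColor(assignments[label]);
    }
  }
  detector.detect(video, gotDetections); // keep the detection loop running
}

// During onboarding, the user holds up an object while picking a color.
function assignObjectToColor(label, color) {
  assignments[label] = color;
}

function setModuleColor(color) {
  // Placeholder for the grid-coloring logic.
  console.log('color module', color);
}
```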

Movement

Speech Recognition

With the help of the ML5.js library, when the user says whether they want to go up, down, left, or right, the interface detects the word and moves across the grid accordingly.

Audio interaction in action
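One way to build this is with ml5's built-in SpeechCommands18w sound classifier, which recognizes "up", "down", "left", and "right" among its 18 words; the project may differ, and moveOnGrid below is an assumed stand-in for the grid logic.

```javascript
// Voice-controlled movement via ml5's SpeechCommands18w sound classifier.
let classifier;

function setup() {
  noCanvas();
  classifier = ml5.soundClassifier('SpeechCommands18w', { probabilityThreshold: 0.9 });
  classifier.classify(gotCommand);
}

function gotCommand(error, results) {
  if (error) return console.error(error);
  const word = results[0].label;
  if (['up', 'down', 'left', 'right'].includes(word)) {
    moveOnGrid(word);
  }
}

function moveOnGrid(direction) {
  // Placeholder: in the project this shifts the active module on the grid.
  console.log('move', direction);
}
```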

Facial Recognition

Thinking about other ways of controlling direction, I came up with the idea of using my face as a controller. With the ML5.js faceMesh library, I was able to track when my face turned left, right, up, or down.

Face mesh for opening mouth, turning head left and right, and moving head up and down.

Demonstration of how a face may be used to control movement throughout the grid

To detect when a user turned their head right, left, or up, or opened their mouth to move down, I found the points in the faceMesh library that correspond to the left and right cheeks, the forehead, and the mouth, and measured the distance from each of those points to the nose. When a distance rises above or falls below a threshold, the grid moves.
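A sketch of that thresholding with ml5's facemesh predictions (v0.x API). The landmark indices and threshold values are rough assumptions chosen for illustration, not the project's calibrated numbers.

```javascript
// Head-turn and mouth-open detection from facemesh landmark distances.
let video, facemesh;
let mesh = null;

// Approximate MediaPipe FaceMesh indices (assumed).
const NOSE = 1, FOREHEAD = 10, LEFT_CHEEK = 234, RIGHT_CHEEK = 454;
const UPPER_LIP = 13, LOWER_LIP = 14;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  facemesh = ml5.facemesh(video, () => console.log('facemesh ready'));
  facemesh.on('predict', results => {
    if (results.length > 0) mesh = results[0].scaledMesh;
  });
}

function draw() {
  image(video, 0, 0, width, height);
  if (!mesh) return;
  const d = (a, b) => dist(mesh[a][0], mesh[a][1], mesh[b][0], mesh[b][1]);

  // Turning the head shortens the on-screen distance from the nose to one cheek.
  if (d(NOSE, LEFT_CHEEK) < 60) moveOnGrid('left');
  if (d(NOSE, RIGHT_CHEEK) < 60) moveOnGrid('right');
  // Tilting the head up shortens the nose-to-forehead distance.
  if (d(NOSE, FOREHEAD) < 50) moveOnGrid('up');
  // Opening the mouth widens the gap between the lips.
  if (d(UPPER_LIP, LOWER_LIP) > 25) moveOnGrid('down');
}

function moveOnGrid(direction) {
  // Placeholder for the grid-movement logic.
  console.log('move', direction);
}
```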

Interface Design

Since many of the ways to interact with the interface are experimental, a big part of designing the interface was educating users on the purpose of each interaction and onboarding them.



With the addition of a home page, individual descriptions of each interaction and the libraries it uses, and pop-ups to onboard users, visitors can understand the purpose of each interaction and how it can be used to design for accessibility.

Original Figma mockups


Project Reflection

These experiments helped me understand how to implement alternative interactions in an interface and how they can make my interaction design more intuitive and usable. Onboarding was especially hard to design for: many of these interactions are experimental, and users cannot be expected to know how to use the interface on their first visit.

Moving forward, I would like to move past relying on the camera and microphone as the primary hardware for detecting interactions and explore how physical objects can be used in interaction design.