Amy Roberts

Amy Roberts is a product designer currently based in Austin, TX.

Published February 17, 2016

Behavioral Prototype: Xfinity X1 Gesture Recognition

Design

We were tasked with building and testing a behavioral prototype for a gesture recognition platform. We chose to use Xfinity X1 because we had easy access to the system and its interface was simple enough for us to apply gesture controls to.

We first considered using 2D interactions to control the system but settled on 3D interactions because they seemed more intuitive and less cumbersome. Although we did not have access to an actual gesture recognition platform, a behavioral prototype let us simulate one: a hidden team member operated Xfinity X1 with a remote in response to the user's gestures.

In designing our prototype, we sought to explore the following questions:

  • How can the user effectively control the interface using hand gestures?
  • What are the most intuitive gestures for this application?
  • What level of accuracy is required in this gesture recognition technology?

After considering these questions and the technical limitations of conducting our behavioral prototype testing session, we decided to narrow our focus to basic video function controls, because they could all be operated in a hidden area by someone with a remote. This allowed us to easily evaluate variations in user interactions.

[Image: Our testing setup]

Gestures

We created several gestures to control basic video functions in our prototype:

  • Browse: Move hand left and right, up and down
  • Select movie: Tap with hand
  • Play/Pause: Tap with hand to toggle
  • Forward: Move hand quickly to right, tap to stop
  • Rewind: Move hand quickly to left, tap to stop
  • Volume up/down: Move hand up/down
  • Exit movie: Flick hand upward
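The gesture set above can be thought of as a simple velocity-based mapping: taps select or toggle, fast motions trigger forward/rewind/exit, and slow motions browse or adjust volume. The sketch below is purely illustrative, with assumed thresholds, function names, and a browsing/playback context flag; the actual prototype was a hidden operator with a remote, not software.

```python
# Hypothetical sketch of the gesture-to-command mapping above.
# All names and thresholds are assumptions for illustration only.

FAST = 1.0  # assumed velocity threshold (units arbitrary) separating a flick from a slow move

def classify(vx: float, vy: float, tapped: bool, browsing: bool) -> str:
    """Map one hand-motion sample to a video command.

    vx, vy   -- horizontal/vertical hand velocity (positive = right/up)
    tapped   -- whether the hand performed a tap
    browsing -- True on the browsing screen, False during playback
    """
    if tapped:
        # A tap selects while browsing; during playback it toggles
        # play/pause and stops an active forward/rewind.
        return "select" if browsing else "play/pause"
    if browsing:
        # Any slow motion moves the selection left/right/up/down.
        return "browse" if (vx or vy) else "idle"
    if vy > FAST:
        return "exit"        # upward flick exits the movie
    if vx > FAST:
        return "forward"     # fast rightward motion
    if vx < -FAST:
        return "rewind"      # fast leftward motion
    if vy:
        # Slow vertical motion during playback adjusts volume.
        return "volume up" if vy > 0 else "volume down"
    return "idle"
```

One design choice worth noting: velocity is what disambiguates overlapping motions here (slow up = volume up, fast up = exit), which is exactly the kind of accuracy question the third research question raises.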

We created illustrations for each of these gestures and assembled them into an instruction manual that would come packaged with the Xfinity X1 gesture recognition system.

[Image: Gesture illustrations from the instruction manual]

Evaluation Session

To explore how intuitive our gestures really were, we split the evaluation session into two parts: one where the user completes tasks without instructions, and another where they receive the instruction booklet beforehand. This way, we could easily see what worked and what didn't.

We tested our prototype with one user, describing to them a scenario in which they have just purchased a new Xfinity X1 gesture recognition system and are now trying it out for the first time.

The following list shows the tasks we gave to the user, with the gestures tested during each task in parentheses. We went through the tasks first without instructions, then with instructions.

  • Task 1: Browse through the movies and select one to watch. (Browse, Select movie)
  • Task 2: Play the movie for a while, then pause it. (Play/Pause)
  • Task 3: Rewind back to an earlier scene. (Rewind)
  • Task 4: Adjust the volume to be a little higher. (Volume up)
  • Task 5: Exit the movie to go back to the browsing screen. (Exit)
[Image: User selecting a movie]

[Image: User rewinding]

Analysis

What worked well

One thing that worked well was the way we divided our user test. In the phase without instructions, many of the commands the user tried were similar to the ones we had created, showing us that our gestures were fairly intuitive. In our short interview after the test, we learned that the user found the system easy to use. They liked not having to use a remote, and the gestures felt fairly fluid to them.

What needed improvement

At one point in our testing, the user was unsure of what gesture to use for "exit video." We had not explicitly discussed what to do if the user couldn't figure out a gesture, so we ended up triggering the action with the remote after their attempt so the test could continue. The user also experienced some lag, which made them wonder whether the system was recognizing their gestures at all.

Effectiveness of design

Based on our evaluation with one user, our design was effective in letting the user intuitively control basic video functions. The test also surfaced important design considerations: careful thought must go into intuitive mappings for less straightforward commands, like "exit." The lag was a downside of the behavioral prototype method, but it also underscored how important real-time feedback is for making the user feel in control.

In future iterations, it might be beneficial to test several different kinds of interactions to see which the user most prefers. It could also be useful to conduct a longer test, since our gesture recognition platform is tied to movie watching: a movement that seems intuitive at first might become tiring when repeated several times over the course of a film.
