Amy Roberts

Amy Roberts is a product designer currently based in Austin, TX.





iOS App


2016 (8 weeks)


Sound Capstone course, University of Washington


UX Design, UI Design, Identity


Todd Maegerle
Alex Melnik
Amy Roberts


Numerous apps support anonymous text-based interactions, but there is a noticeable lack of social apps for recording sound. While many social media platforms incorporate location tagging, this is often used for text or images, not sounds. We wanted to explore this space by creating a platform for anonymous users to share sounds with each other.


Soundscape is a location-aware sound recording app that allows users to interact with each other and the world through sound. A user will record a sound sample on their phone, and then “place” it at their geographic location. When another user walks near the location where that sound was placed (e.g. within ~10 feet), the sound sample will automatically play on their phone. All samples are placed anonymously—there are no sign-ups or accounts. When “listening mode” is enabled, you can discover a world of sounds recorded by other users.
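The trigger mechanic above can be sketched as a simple distance check. This is a self-contained illustration, not the app's actual code: the haversine formula stands in for CoreLocation's distance calculation, and the exact radius value is an assumption based on the "~10 feet" figure.

```swift
import Foundation

// Illustrative sketch of the proximity check: a sample plays back when
// the listener is within the playback radius of where it was placed.
// The radius value is an assumption (~10 feet, expressed in meters).
let earthRadiusMeters = 6_371_000.0
let playbackRadiusMeters = 3.0

// Haversine great-circle distance between two lat/lon points, in meters
func distanceMeters(lat1: Double, lon1: Double,
                    lat2: Double, lon2: Double) -> Double {
    let dLat = (lat2 - lat1) * .pi / 180
    let dLon = (lon2 - lon1) * .pi / 180
    let a = sin(dLat / 2) * sin(dLat / 2)
          + cos(lat1 * .pi / 180) * cos(lat2 * .pi / 180)
          * sin(dLon / 2) * sin(dLon / 2)
    return 2 * earthRadiusMeters * asin(sqrt(a))
}

// True when the listener has walked inside the sample's playback radius
func shouldPlay(sampleLat: Double, sampleLon: Double,
                userLat: Double, userLon: Double) -> Bool {
    return distanceMeters(lat1: sampleLat, lon1: sampleLon,
                          lat2: userLat, lon2: userLon) <= playbackRadiusMeters
}
```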


I worked with two developers over the course of eight weeks to create a functional iOS app. As the designer on the team, I created wireframes, designed an interactive prototype in InVision, created a logo and icon, created graphics for the iOS app, and designed a poster for our final demo. I also conducted user testing throughout the process to evaluate our progress.


The purpose of this project is to create a simple yet enjoyable app that lets users share sounds with each other. Anyone who downloads the app will be able to record a few words, nearby animal life, or just ambient atmosphere. One can then “place” the sample where they are standing. Afterwards, anyone who walks by that spot will be able to hear the sample where it was originally placed.

Over the past few years, geographically aware social media apps have flooded the market. These include Yik Yak, where users post to a public wall visible to people within a 5-mile radius, and similar photo-sharing apps. Soundscape targets users who want to discover artifacts placed by other people. It serves as a "geocache" for sounds: instead of scrolling through a list of posts, users take their phones outside and find where others have placed samples.


How it Works


UI Flow

The functionality of Soundscape can be broken down into two key components: the sample recorder/geographic placer, and listening mode. Both can be accessed from the main screen of the app.



For the logo, we used a circular shape containing a simplified form of a sound wave to signify sound samples. We added perspective lines to the app icon and other elements to suggest a three dimensional landscape on which to place sounds.




We created a prototype using InVision so we could visualize the app transitions and have a working model that could be tested with users before development. InVision allowed us to easily work in an iterative process, adding or removing features as necessary based on the results of our testing.

We tested our prototype with users throughout our process to ensure usability, tweaking icons, button placement, and overall functionality so that the app was easier to use. It also helped inform our visual design, allowing us to test a second version with a purple background. Overall, users preferred simple functionality and the visual lightness of a white background.



We needed a database to serve the appropriate samples for playback as the user moves from one location to another. We used Facebook's Parse, which is shutting down in 2017, as our online server. Soundscape uses a unique application and client ID to send and receive data from Parse. Sounds that users record are stored in Parse with a geographic tag. When a user enables listening mode, a Parse client installed within Soundscape sends query requests to Parse, which compares the user's current location to each sample's geographic location and returns an array of nearby samples. While the Parse client updates the array of nearby samples, a background thread sifts through them and plays back only those that are a few feet away from the user.
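A listening-mode query of this kind could be written with the Parse iOS SDK's geo-query support. The following is a hedged sketch: the class name `"Sample"`, the key `"location"`, and the search radius are assumptions about the schema, not the app's actual values.

```swift
import Parse
import CoreLocation

// Hypothetical sketch of the listening-mode query. The "Sample" class
// and "location" key are assumed names; 0.01 miles (~50 feet) is an
// illustrative search radius, not the app's actual value.
func fetchNearbySamples(around current: CLLocation,
                        completion: @escaping ([PFObject]) -> Void) {
    let point = PFGeoPoint(location: current)
    let query = PFQuery(className: "Sample")
    // Parse returns matching objects ordered by distance from the point
    query.whereKey("location", nearGeoPoint: point, withinMiles: 0.01)
    query.findObjectsInBackground { objects, _ in
        completion(objects ?? [])
    }
}
```

The background thread described above would then filter this array down to samples within the playback radius before handing them to the audio player.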


We used Apple’s AVFoundation framework to record, encode/decode, and play back audio files. An AVAudioRecorder accessed the available microphones and recorded samples when invoked. A simple AVAudioPlayer played back samples received from the background thread, as well as ones the user had just recorded.
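The record-then-replay flow can be sketched with those same AVFoundation classes. This is a minimal illustration under stated assumptions: the file name and audio settings here are placeholders, not the app's actual configuration.

```swift
import AVFoundation

// Minimal sketch of the record/playback flow; the file name and
// audio settings are assumptions, not the app's actual values.
func recordAndReplay() throws {
    // Configure the shared audio session for simultaneous record/play
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(AVAudioSessionCategoryPlayAndRecord)
    try session.setActive(true)

    let url = URL(fileURLWithPath: NSTemporaryDirectory())
        .appendingPathComponent("sample.m4a")
    let settings: [String: Any] = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 44_100.0,
        AVNumberOfChannelsKey: 1
    ]

    // AVAudioRecorder captures from the device's default microphone
    let recorder = try AVAudioRecorder(url: url, settings: settings)
    recorder.record()
    // ... stop once the user finishes recording ...
    recorder.stop()

    // AVAudioPlayer plays the encoded sample straight back to the user
    let player = try AVAudioPlayer(contentsOf: url)
    player.play()
}
```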


We presented our app and held a live demo in the lobby of the Paul G. Allen Center. To communicate the purpose of our app to visitors, we created a large format poster.



To evaluate the success of our app, we conducted a live demo in a large room with users, giving them an iPhone with the app installed and a brief introduction. We measured whether they were able to perform the tasks of first recording and placing a sound, and then discovering sounds placed by other users.

The three users we tested our app with were able to successfully perform all tasks and found the experience of listening to other placed sounds enjoyable. The GPS signal was not always accurate, but the discrepancy was not large enough to significantly hinder a user’s ability to discover placed sounds in the area.

We made some tradeoffs in our design in order to create a functional app and an enjoyable user experience within our scope and timeline. We wanted all of the functions to be easily accessible, so we kept everything on one screen instead of hiding features behind a menu, but this risked making the user interface too cluttered. We used Parse due to the convenience of implementation, despite the fact that it will expire in January 2017. It allowed us to not worry about much of the back-end management details, and also provided an excellent interface for storing and fetching geographic data points.

Based on the feedback we got from users, the app we created has the potential to engage users in an enjoyable experience of placing and discovering new sounds. While it is beyond the scope of our project, testing interactions between multiple users at once would be a logical next step in evaluating Soundscape’s potential.
