I was a member of a team of four user experience researchers on a project to help a startup still in an incubator. Their product was very early in development, and they had been making significant changes to its core functionality on a weekly basis. My team had been in contact solely with the founders, who were neither the developers nor the designers of their educational iPad app.
How do you help empower an early startup with user feedback?
Since our clients were eager to tackle many problems with their app at once, it became important for us to define attainable goals. We scheduled bi-weekly meetings to get face time with our client and the current version of the application. Because we were testing both a user mode and an administrator mode of their classroom app, we decided to run two separate test groups. We would test five users per group, which is enough to uncover over 75% of an app's usability problems cost-effectively [according to Jakob Nielsen]. To ensure all of our testing instruments [the app, lab cameras, questionnaires, tasks, software, etc.] were in order, we also planned pilot tests with two additional users.
After creating personas for the administrator and the student, my team found a college campus to be a suitable location for recruiting participants. We chose to recruit college freshmen for the user mode and graduate students for the administrator mode, using a different screener questionnaire for each group. We trimmed our evaluation tasks so that each session would take an estimated 30 minutes.
Due to circumstances beyond their control, our clients were not able to deliver fully working prototypes in time for the usability tests. I suggested several prototyping apps that could make PDF files interactive. However, the clients did not have all of the screens mocked up, and some interactive elements could not be replicated in those apps. My idea was to take the static PDF files and create paper prototypes that my team could use to emulate the iPad app's interactions. After several additional hours of practice, and after validating our accuracy with the client, my team and I had it down. I marked "stage positions" in tape on the testing desk so the paper screens stayed in frame of the web camera used to film each usability test.
Our user tests went nearly as planned. I was a little unsure how users would react to viewing paper screens instead of an actual iPad, but to my delight most participants were able to dive right in after receiving instructions. It was great to moderate with the clients present during some sessions; my team and I felt it would be valuable for them to hear directly from people similar to their potential customers about the positives and negatives of the app experience.
The feedback from most users was in line with many of the concerns we had identified for the clients prior to testing. During our client meetings, my team had made several recommendations for a more user-friendly app. From our test results, we categorized the user feedback into in-test reactions and post-test questionnaire responses. To conclude our work together, my team created a presentation and report that summarized our findings and offered suggestions.
Though not all of the users we tested were excited to continue using the product beyond our sessions, my team did an excellent job of articulating their gripes. Our job was to help our clients see beyond the app itself to what drives their users. The paper prototype was a great tool: the client team had never expected it could show them how people would actually use their app. They took the results of the user tests and told my team they would work hard on implementing some of our functionality suggestions. I found the experience an enlightening lesson in the importance of communicating goals.