Covid-19 Contact Tracing App

Design Process

Low-Fidelity Prototype

The low-fidelity prototype was based on a series of interviews our team conducted with Northeastern students. After consolidating our interview notes, we created an affinity diagram broken into seven major areas of interest: Opinions about Contact Tracing Apps, Privacy, Wellness Checks & Test Scheduling, Desired Info if Notified of Contact, Desired Info if Reported Positive, Pros/Cons of the Iceland Tracking App, and Other Requests. We used this data to develop requirement statements and tasks based on relevant information and user requests.

From the requirement statements, we began by sketching thumbnails on our own before discussing what the user scenarios might be as a group. After creating concrete user scenarios and personas, we created the first wireframe prototype using Figma.

Insights:

It was eye-opening to see how each individual in the group focused their attention on a different area of the sketches. While I spent a lot of time thinking about the best process for reporting a positive test, others focused their time on navigation or the confirmation process. If we had all tried to start sketching collaboratively without first creating sketches on our own, we would have missed out on some interesting ideas, and our thoughts would have been less organized. Going through the ideation process in this way showed us how valuable it is to sketch, brainstorm, and explore concepts on your own before working in a group.

The ideation process was also insightful because it revealed the assumptions and biases each of us had about the ideal design of the app. For example, part of my design featured a way for users to “reset” their positive status once they recovered from the virus, which led us to ask, “what happens after users are no longer positive?” Each of us had different insights about how to tackle this problem; we first discussed having the user handle their own positive Covid status, and then shifted toward making a user's Covid status dependent on test results and Northeastern administration.

Medium-Fidelity Prototype

Using the low-fidelity mock-ups, we asked students for feedback on the designs. We took that user feedback and reanalyzed our designs, which resulted in layout, flow, and copy changes. After rethinking the design, we updated the user scenarios accordingly. We then performed a formative evaluation of the designs with a different set of students, taking notes on issues and keeping track of possible solutions and recommendations.

Finally, we developed our medium-fidelity prototype.

High-Fidelity Prototype

Once the medium-fidelity prototype was ready for testing, we constructed a summative evaluation plan. This included a list of objectives we wanted to test during the usability test. Based on our user scenarios, we developed test scenarios and tasks for the user to complete. We continued to develop our final prototype for students to test.

Final Report

While conducting our usability tests, we collected metrics on task-based efficiency, task-based effectiveness, and overall user satisfaction. To measure task-based efficiency, we recorded time-on-task per participant: how long the user spent on each task, from the moment they navigated to the prototype link to the moment they completed the last subtask. At the end of each task, before the user could move on to the next one, we also asked the user to fill out a quick, three-question After-Scenario Questionnaire (ASQ) to subjectively rate the difficulty of task completion. The metrics were calculated using the online platform Qualtrics.
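As a minimal sketch of how these per-task metrics could be aggregated (the sample data, field names, and scale direction are purely illustrative, not taken from our actual results):

```python
from statistics import mean

# Hypothetical records for one task: three ASQ item ratings (7-point scale)
# and time-on-task in seconds, one entry per participant.
task_results = [
    {"asq": (2, 3, 2), "seconds": 48.0},
    {"asq": (1, 2, 2), "seconds": 35.5},
    {"asq": (4, 3, 5), "seconds": 72.3},
]

# ASQ score for the task = mean of the three item ratings per participant,
# averaged across participants.
asq_score = mean(mean(r["asq"]) for r in task_results)

# Efficiency summary = mean time-on-task across participants.
mean_time = mean(r["seconds"] for r in task_results)

print(round(asq_score, 2), round(mean_time, 2))  # 2.67 51.93
```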

Lastly, to measure overall satisfaction, we gave the users a 10-question System Usability Scale (SUS) survey after they completed all 14 tasks. The data was collected through the same Qualtrics survey linked in the appendix. This questionnaire covered the user's entire experience with the app rather than individual tasks. In total, we used five different metrics to analyze the usability of our application. At the end of the usability test, we asked the users a set of open-ended questions to gain more insight into the user experience.
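The SUS responses reduce to a single 0–100 score using the scale's standard formula; a small sketch (the example responses are illustrative, not our participants' data):

```python
def sus_score(responses):
    """Compute a SUS score from ten 1-5 Likert responses.

    Standard SUS scoring: odd-numbered items (positively worded) contribute
    (response - 1); even-numbered items (negatively worded) contribute
    (5 - response). The summed contributions are scaled by 2.5 to yield 0-100.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # even index = odd-numbered item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Example: strongly agreeing with every positive item and strongly
# disagreeing with every negative item gives the maximum score.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

A neutral response of 3 on every item works out to a score of 50, which is a useful sanity check when wiring this up to exported survey data.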