
ExplorAR - Augmented Reality (AR) Creation Tool

Research | UX | UI | Visual Design | Augmented Reality | AR


Overview

 

As part of my Master's program at DePaul University, I took a Capstone class in which I worked with 3 other students to study and design an AR creation platform called ExplorAR. After analyzing 8 existing AR platforms in a competitive review and performing a preliminary literature review, our team conducted live user interviews and a hybrid card sort. Using the findings from the interviews and card sort, we proposed a "no-code" platform that gives all types of users, expert and novice alike, an easy-to-use process for creating and sharing AR content. We believe ExplorAR is better adapted to the needs of any type of AR user than the existing platforms we reviewed in our competitive analysis.

Time

 

10 weeks (Spring 2022)

Methods & Tools

 

User Research Methods

Competitive Review, Literature Review, Live User Research (Interviews), User Personas, Hybrid Card Sort, Information Architecture

Design Tools


Pen & paper, Figma, FigJam, Adobe XD, Optimal Sort

 
Practices

Design Thinking, Product Design, Interaction Design, Visual Design

The Challenge

 
Design an AR creation platform for users interested in augmented reality

Our team was tasked with defining an idea that we would work on throughout the 10-week course. When we began exploring the topic, our original aim was to devise an AR creation platform specific to the DIYer who posts on YouTube. However, based on the peer feedback we received on our initial proposal, our team decided to design an AR platform that is a simple, standalone, "all-in-one" application.


Our guiding principles were inclusivity and accessibility: the platform should give users the ability to create content regardless of their education, technical expertise, age, first language, or access to advanced technology.

 

Additionally, our team focused on simplicity and ease of use, envisioning a product that provides onboarding guidance, encourages users to track their progress, and minimizes cognitive load. We decided to prototype our platform as a web app because it provides universal access across devices without requiring users to install software.

 

Project Goals

Goal 1

Determine the features that users consider the most important in an Augmented Reality (AR) platform.
 

Goal 2

Determine the best process (or workflow) for users to create and share AR content.
 

Goal 3

Determine the most effective information architecture for an AR platform.
 

Goal 4

Determine if providing “verbal guidance” from the platform negatively or positively affects the authoring process. If so, determine the most effective approach for integrating visual and audio information.

 

*For this final goal, we limited the verbal guidance feature to only guiding users through the main steps of the creation process and did not add audio to all features and functionality.

Competitive Review

 

Each team member performed a Google Search to find AR-specific platforms and used “Augmented Reality Platforms/Authoring Tool” as the search criterion. After reviewing dozens of potential competitor products in the market, our team chose to analyze eight that included useful features for creating and sharing AR content.
 

Competitors Reviewed

We analyzed ZapWorks, ARCore, ARKit, Amazon Sumerian, Adobe Aero, ROAR, Overly, and Mirra because of their market popularity, their coverage of iOS, Android, and Windows, and the interactive features and specifications that we found desirable for our platform.

 

Product Analysis

Each team member individually explored 1 or 2 products and noted each platform’s strengths, weaknesses, features, functionality, and other important findings in an Excel spreadsheet. We evaluated the competitors using three key factors: ease of use, quality of use, and benefit of use.

explorar.jpeg

Conducting a competitive review helped our team identify key benefits for our web-based AR platform. One benefit is that it allows users to create AR content using a step-by-step approach that does not require coding. A second is the verbal guidance we added to the “Steps” feature of our mid-fi prototype, which assists users in the creation process and differentiates our platform from the competitors. Additionally, we included a diverse set of “starter” elements and tools that enable users to easily personalize their AR content.

Literature Review

 

Each team member reviewed two or more peer-reviewed literature references related to Augmented Reality platforms. We then wrote article summaries and shared them with each other, following the guidelines as outlined in the “summary content” section for the individual class activity. We highlighted points of interest and thought of ideas on how we could incorporate select features/functionalities into our design.

 

Live User Research

 

Our team recruited 9 participants from the participant pool and through personal connections for our live user interviews. Before the interview, participants were sent an informed consent form and a screener questionnaire. The form asked them to confirm that they were 18 years old or older and to use a 5-point Likert scale to indicate their awareness of AR and their level of familiarity with creating AR content. We identified participants as either an “expert” or a “novice” based on these responses: 4 out of 9 participants were identified as expert AR users, and 5 out of 9 were novices. Each team member interviewed at least 2 participants remotely via Zoom.
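As a rough illustration, here is a minimal Python sketch of how such a screener classification could work. The familiarity cutoff (>= 4) and the sample responses are assumptions for illustration, not the exact rule or data we used.

```python
def classify_participant(awareness: int, familiarity: int) -> str:
    """Label a screener respondent from two 1-5 Likert ratings."""
    if not all(1 <= r <= 5 for r in (awareness, familiarity)):
        raise ValueError("Likert responses must be between 1 and 5")
    # Assumed cutoff: high familiarity with *creating* AR content marks an expert.
    return "expert" if familiarity >= 4 else "novice"

# Invented sample responses: (awareness, familiarity) per participant.
participants = {"P1": (5, 4), "P2": (3, 1), "P3": (4, 2)}
for pid, (awareness, familiarity) in participants.items():
    print(pid, classify_participant(awareness, familiarity))
```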

Considering the thoughts and impressions shared by the participants, we organized our interview findings into 3 main categories:

1. Ease of Use

All 9 participants emphasized the importance of an easy-to-use AR platform with easily recognizable UI elements and simple navigation for effectively creating AR content.

2. Simple Onboarding

Seven out of 9 participants said that they would prefer to have an AR platform with simple onboarding and “help” features. 


3. Functionality

Six out of 9 participants highlighted that since AR is “the next big thing,” the platform’s performance should be seamless and uninterrupted. Participants who had experience using the AR platform SparkAR noted difficulty dragging and dropping objects into the workspace and a lag in the system during content playback.

 

Based on these findings, we decided to integrate the following features into our system:

  • Easy navigation to help users find and utilize the main features of the AR platform and create content.

  • A step-by-step progress bar and verbal guidance to help users track their progress on the content they are making.

  • The ability to connect with the community to ask questions whenever needed.

  • A set of templates to give users a starting point for creating content of their choice.

 

User Persona

 

After the interviews, we synthesized our transcripts and notes to highlight major themes. The themes and trends we analyzed from the interview data helped us create two user personas: Maureen and Alex. 

explorar2.jpeg
explorar3.jpeg

Design Charrette Session

 

After creating our user personas, each team member individually brainstormed navigation options for our platform. A total of 165 navigation options were generated; as a team, we put them in an Excel sheet, performed an open card sort, and then discussed which cards and categories were important to include in our hybrid card sort activity. This exercise also helped us understand the major workflow of our platform and gave us an idea of which features were important to include.

Hybrid Card Sort

 

Using the insights from our design charrette exercise, we identified the primary categories and the cards that would nest within those categories. We used Optimal Sort to create and analyze the digital hybrid card sort. Participants were asked to respond to the following pre-test and post-test questions:

  • Indicate their level of familiarity with creating AR experiences on a scale of 1 to 5.

  • Share any category they felt we overlooked and should be included in our platform.

 

Fifteen participants attempted the card sort; three abandoned it and 12 completed it. Seven out of 12 participants were not familiar with creating AR content, 3 were somewhat familiar, and 2 were moderately familiar.
 

Participants were asked to categorize 56 cards under the following categories.

  1. Assets

  2. Camera

  3. Edit

  4. File

  5. Help

  6. Insert

  7. Tools/Object

  8. View

The overall percentage of agreement among cards and categories was low. The highest agreement rate reached among categories was 42%, and the lowest percentage was 19%. 
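For readers unfamiliar with how such agreement percentages are derived, below is a minimal Python sketch under the assumption that agreement for a card is the share of participants who placed it in its most popular category. Optimal Sort's exact formula may differ, and the cards and placements shown are invented for illustration.

```python
from collections import Counter

# Invented raw data: for each card, the category each participant placed it in.
placements = {
    "Undo":   ["Edit", "Edit", "Tools/Object", "Edit"],
    "Import": ["File", "Insert", "File", "Assets"],
}

for card, categories in placements.items():
    # The modal category is the one most participants agreed on.
    modal_category, votes = Counter(categories).most_common(1)[0]
    agreement = votes / len(categories)
    print(f"{card}: {agreement:.0%} agreement on '{modal_category}'")
```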

Lo-Fi Prototypes

After analyzing the card sort results, our team began sketching basic lo-fi prototypes. We decided to use Figma for its collaboration capabilities, which allowed everyone to work on the prototype simultaneously.

 

Prior to designing the screens, each team member explored similar AR applications on online creative platforms such as Dribbble and Behance. We then individually sketched ideas using paper and pencil and Procreate (a tablet app). We discussed which screens and features were essential and listed what we needed to evaluate in our usability test session.
 

Style 

Our prototypes were created using a grey color scheme with a font that mimicked handwriting but was easy to read. We used universal icons, a basic logo, and a simple, clean layout that was not too overwhelming for first-time users.


Features

On our lo-fi prototype, we focused on the following key features:
1. Project Dashboard
2. Template Options
3. Project Workspace
4. Assets (Adding Elements and Text)
5. Interactions Tab
6. Preview Mode

 

Information Architecture

We used the results from our hybrid card sort to organize the menu names and structure for our platform. We translated each sketch into a separate Figma wireframe and refined the concept by iterating on team feedback. We then consolidated the best ideas from each team member's design into one prototype. Overall, our team created 36 prototype screens, made them clickable in Figma for our usability test sessions, and generated a shareable link for our participants.

explorar5.png

Lo-Fi Usability Testing

 

We recruited 4 participants for remote moderated usability testing of our lo-fi prototype.

 

Before conducting the test sessions, our team created a usability test protocol that included a brief introduction, a participant's statement of verbal consent, background questions, warm-up questions, definitions of augmented reality and content creation, a scenario, four tasks that participants were asked to complete, and demographic and wrap-up questions.

explorar8.jpeg

During the sessions, we encouraged the participants to think aloud while performing the tasks. Each task had a post-task question, where we asked the participants to rate their confidence in completing the task using a 5-point Likert scale (1 = Not at all confident and 5 = Extremely confident). Each task was measured as Pass/Fail, and we also noted the time it took each participant to complete the task.

explorar9.jpeg

The table below summarizes the average completion time for each task and the confidence rating.

explorar10.jpeg
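As an illustration of how these per-task measures can be tabulated, here is a minimal Python sketch; the task records below are invented placeholders, not our participants' actual results.

```python
from statistics import mean

# Invented records per task: (completed, seconds, confidence 1-5) per participant.
results = {
    "Task 1": [(True, 95, 4), (True, 120, 5), (False, 210, 2), (True, 140, 4)],
    "Task 2": [(True, 60, 5), (True, 80, 4), (True, 75, 5), (True, 66, 4)],
}

for task, records in results.items():
    pass_rate = sum(ok for ok, _, _ in records) / len(records)
    avg_time = mean(seconds for _, seconds, _ in records)
    avg_conf = mean(conf for _, _, conf in records)
    print(f"{task}: {pass_rate:.0%} pass rate, {avg_time:.0f}s average, "
          f"confidence {avg_conf:.1f}/5")
```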

Mid-Fi Prototypes

 

Applying the findings from our lo-fi usability test sessions, we designed our mid-fi prototype. Instead of using Figma, as we had done for our lo-fi prototype, our team decided to use Adobe XD for its easy voice integration. Adobe XD has a built-in text-to-speech playback feature that allowed us to easily add text that produced system-generated speech. At first, we tried using Figma for our voice-guided prototype but had to install a plugin called “Anima,” which was confusing, cumbersome, and time-consuming to use because it required embedding code for an mp3 file to play the verbal guidance.

 

For our mid-fi prototype comparison evaluation, we needed to produce a total of 106 screens (53 screens for Prototype A – Without Voice and 53 screens for Prototype B – With Voice Enabled).
 

Process 

We created an Adobe XD file, revised our logo, and added color to the design. The screens were then made clickable using the “Prototype” feature of Adobe XD. We quality-tested the two prototypes to ensure all features functioned properly before usability testing.
 

Design Changes

We updated the layout, visual style, and organization of content and navigation on our platform to better meet the needs of the user. Since we had two prototypes, the significant design changes took place in Prototype A, which did not have verbal guidance. Prototype B, which included the verbal guidance feature, was a copy of Prototype A whose only additional change was the “Steps” feature at the top.
 

Color Contrast

For our mid-fi prototype, we used a monochromatic color palette with an accent color. We ensured that the colors had high contrast, and the text was easy to read. We tested our color palette using software called Stark, a contrast-checking plugin for Adobe XD.
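For context, the kind of check that Stark automates can be expressed directly from the WCAG 2.x formulas. The sketch below computes a contrast ratio for two hex colors; the palette values are placeholders, not our actual colors, and 4.5:1 is the WCAG AA threshold for normal-size text.

```python
def _channel(c: int) -> float:
    """Linearize one sRGB channel (0-255) per the WCAG definition."""
    v = c / 255
    return v / 12.92 if v <= 0.03928 else ((v + 0.055) / 1.055) ** 2.4

def luminance(hex_color: str) -> float:
    """Relative luminance of a hex color like '#1A1A2E'."""
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _channel(r) + 0.7152 * _channel(g) + 0.0722 * _channel(b)

def contrast_ratio(fg: str, bg: str) -> float:
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Hypothetical palette colors, for illustration only.
ratio = contrast_ratio("#1A1A2E", "#FFFFFF")
print(f"{ratio:.2f}:1 -> {'passes' if ratio >= 4.5 else 'fails'} WCAG AA")
```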
 

Logo

We decided to name our platform “ExplorAR” and designed a logo that was based on the branding and colors used in our overall platform.
 

Elements

Since some users had difficulty differentiating between the terms “Assets” and “Tools” on our lo-fi prototype, we changed the “Assets” label to “Elements.” It contains icons, backgrounds, objects, text and drawing tools, photographs, videos, and sound clips.


Split Preview and Publish as Two Different Steps

Participants from our lo-fi usability testing were confused by having “Preview and Publish” as a combined step and recommended splitting it up. Therefore, in our mid-fi prototype, we split them into two separate buttons. Once the user clicks the “Publish” button, a QR code and link are generated to be shared with others.
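As a sketch of what the “Publish” step could do behind the scenes, the snippet below generates a QR code for a share link using the third-party qrcode package. The URL scheme and the publish function are hypothetical, since ExplorAR was prototyped in Adobe XD rather than implemented.

```python
import qrcode  # pip install qrcode[pil]

def publish(project_id: str) -> str:
    """Generate the share link and QR code for a published project."""
    # Hypothetical URL scheme for a published AR experience.
    share_url = f"https://explorar.example.com/view/{project_id}"
    qr_image = qrcode.make(share_url)        # returns a PIL image
    qr_image.save(f"{project_id}_qr.png")    # shown next to the link after publishing
    return share_url

print(publish("greeting-card-001"))
```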
 

Prototype B changes

Our second prototype included all the same changes as our first mid-fi prototype. The only addition was a “Play” button in the “Steps” feature at the top; clicking it plays a voice that tells users which step they are on.

explorar11.png
explorar12.png

Mid-Fi Usability Testing

 

Our team recruited four participants from our connections for mid-fi usability testing. All four sessions were conducted through Zoom and were video recorded after receiving the participant's verbal consent. Before performing the test, our team created several artifacts.

 

We created a usability test protocol that included a brief introduction, a participant's statement of verbal consent, background questions, warm-up questions, definitions of augmented reality and content creation, a scenario, five short tasks for the first mid-fi prototype, and three questions to measure impressions of the second mid-fi prototype. Also, the protocol had demographic and wrap-up questions at the end.

Prototype A Testing

We utilized the greeting card scenario from our lo-fi usability testing. The tasks for our mid-fi Prototype A were the same, except we broke them into subtasks to make it manageable for the participants to explore our platform.

explorar17.jpeg

Each task had a post-task question, where we asked the participants to rate their confidence level in completing the task using a 5-point Likert scale (1 = Not at all confident and 5 = Extremely confident). Each task was measured as Pass/Fail, and we also noted the time it took for each participant to complete the task. 

explorar13.jpeg

We also asked participants to rate, on a scale of 1 to 5, how easy or difficult each task was and how satisfied they were with it. The table below summarizes the average completion time, ease-of-task rating, and confidence rating for each task.

explorar14.jpeg

Our participants reported both positive and challenging experiences while using Prototype A along two main dimensions: 1) wayfinding and 2) freedom and flexibility.

Prototype B Testing

For our second mid-fi prototype, we asked the participants to explore the interface and share any thoughts regarding the verbal guidance feature that we added. 

We asked the following questions to understand if participants found the verbal guidance feature useful or not.  

  1. What do you think about having verbal guidance in the prototype?  

  2. Do you think it will be useful for you when using the platform?  

  3. Looking at this interface, do you think it is clear enough that you can move back and forth between the steps? 

All four participants felt that it was a helpful feature; however, some modifications might be needed. Our participants also expressed both positive and challenging experiences while using Prototype B. 

Three out of 4 participants mentioned that the play buttons for verbal guidance on the progress bar were not immediately apparent because they lacked visual prominence. One participant (P1) said, “The buttons were not apparent upfront because the buttons were not that intuitive.” P4 said, “I think that's helpful. I just don't think we still need the step number.”

One issue that we faced during the testing of Prototype B was that playing the verbal feature required an Adobe XD account. To avoid placing the burden of account verification on our participants, we, the facilitators, shared our screens and completed the login process ourselves.


explorar15.jpeg

After collecting information about our two prototypes, we asked our participants in the post-test debriefing which prototype was easier to use. Our 4 participants were split: two said Prototype A was easier, and two said Prototype B was easier.

 

Prototype A Preferred

One participant (P3) liked Prototype A’s features but acknowledged some positive aspects of Prototype B. She said, “It would be cool to see that the voice is playing and maybe highlight stuff on what the voice is saying.” Another participant (P2) preferred Prototype A because the second one startled her, and it was not clear to her that she could play and hear a voice. She thought the audio could conflict with the screen reader a visually impaired person would be using, and that while it may be useful, it would be much more beneficial if it described what to do in each step. She said, “The buttons were not apparent upfront because the buttons were not that intuitive. The steps at the top with triangles seemed like arrows to progress to the next screen and didn't expect that it was a play button, and a voice will play.”

 

Prototype B Preferred

One participant (P1) preferred the verbal guidance prototype and said, “This play button was easier because as I am progressing toward step 3, I can hear the sound. Because I am completing steps 1, 2, and 3 I know that it is better to have audio scripting behind each one. If the button has my consent – if I am pressing on that button and it is playing sounds, so I don’t think it’s annoying. It’s quite a good feature.” Another participant (P4) slightly preferred verbal guidance because some users may want it.  She said, “I wouldn’t say one is easier than the other. I guess it can be useful to have audio, but not a lot of people are going to like having audio if they are working in the library and they don’t have access to a headset. Maybe have a mute option.” 

Product Reaction Survey

At the end of our usability test session, we asked each participant to share feedback about which prototype they found easier to use. We also asked our participants to take a short survey consisting of Product Reaction cards. This survey was inspired by a toolkit developed by Microsoft to understand the “elusive, intangible aspect of desirability resulting from a user's experience with a product” (Barnum, 2011).

Our survey included 35 different adjectives and asked the participants to select the 5 that described our platform best. The analysis highlighted that 3 out of 4 participants (P1, P2, and P3) found our platform easy to use, clean, and innovative. Two out of 4 participants (P3 and P4) found our platform friendly, consistent, and creative. This product reaction survey allowed us to learn our participants’ initial impressions of our platform.

exploar16.jpeg
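A minimal sketch of how such selections can be tallied is shown below; the per-participant picks are illustrative stand-ins, not our participants' exact selections.

```python
from collections import Counter

# Invented selections: each participant picks 5 adjectives from the 35 cards.
selections = {
    "P1": ["easy to use", "clean", "innovative", "fast", "useful"],
    "P2": ["easy to use", "clean", "innovative", "simple", "useful"],
    "P3": ["easy to use", "clean", "innovative", "friendly", "creative"],
    "P4": ["friendly", "consistent", "creative", "simple", "fast"],
}

# Count how many participants chose each adjective.
tally = Counter(adj for picks in selections.values() for adj in picks)
for adjective, count in tally.most_common():
    print(f"{adjective}: chosen by {count} of {len(selections)} participants")
```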

Diversity Documentation

 

Our team took several steps to include diverse participants based on age, level of experience with augmented reality, English as a second language, learning styles, vision impairments, hearing loss, access to the latest technology, and whether they were “digital or non-digital natives”. 
 

During one of the lo-fi prototype evaluations, we moderated an in-person session with an 89-year-old artist who had monocular vision due to an eye stroke in one eye.
 

Our mid-fidelity prototyping evaluations included 4 participants who spoke English as a second language. Two appreciated the verbal guidance because they found it helpful in knowing where they were in the creation process and commented that it would help the infrequent user get re-acclimated to the platform.

Project Reflection

 

Overall, we feel that we successfully achieved the goals of our project. We fulfilled our goal of providing an easy-to-use platform, ExplorAR, to help novice and expert users create and share AR-based experiences. The card sort activity helped us determine the information architecture for ExplorAR, and our usability testing sessions revealed both the positive and negative effects of having verbal guidance in an AR platform. Our results suggest that expert users are likely to prioritize the tools provided over verbal guidance.


There were several limitations to our work on ExplorAR. Our interviews were conducted exclusively over Zoom, and we sometimes faced lag during the sessions, which caused us to ask participants to repeat themselves. The relatively short ten-week timeline of this project was another limitation, which forced us to prioritize our efforts.


In the future, we would like to interview more participants from different fields to further understand users' needs and goals. We would particularly like to interview YouTube and TikTok content creators to learn what is lacking in their field and what could be improved.


Overall, this project helped us explore the AR field in depth and allowed our team to see what goes into designing an AR platform.

References

 

Barnum, Carol M. (2011). Chapter 7 Conducting a usability test. Usability Testing Essentials (pp. 249-285). Morgan Kaufmann. https://doi.org/10.1016/B978-0-12-816942-1.00007-1.
