Medical Device Assembly Training
Product Design | UX / UI
The training tool is provided to Sunovion representatives upon completion of the Sales Training process. Its main goal is to help reps understand and practice how to assemble the inhaler device previously introduced in the educational material. A gamified approach is adopted to boost user engagement and retention, helping reps absorb the necessary information through an intuitive and fun interactive experience.
Discovery
Unlike patients, who can take their time assembling their devices in daily life, sales representatives face a much more time-limited scenario in their presentations and pitches, which requires them to use their time efficiently, ideally without hiccups. Hearing from the field, most reps had experienced some sort of delay or failure in their early practice with new products. It usually takes just a couple of rounds for them to get the hang of it. Our inhaler, with its longer learning curve due to multiple assembly steps, naturally requires more time than usual. Thus our goal is to resolve these learning hurdles as much as possible during onboarding, by providing a thorough training package with targeted exercises.
Pain Points
The training is online, and the actual device package might not be accessible to reps.
Lack of practice leads to unfamiliarity with the device when presenting
Reps have limited time to learn and little motivation to revisit the training docs
It’s frustrating for reps to fail repeatedly due to lack of guidance, or to not be able to practice at all
User study: What they see from the training (with old package)
User study: What they see in practice
Exploration
So how to achieve it?
Through a quick brainstorm, several possible approaches emerged. From virtual experiences to in-person practice, from digital materials to hands-on models, the ideas were evaluated by how well they could achieve our goal, given our resources and time frame.
An idea nicknamed “virtual LEGO” stood out. It emerged as an early concept for an interactive piece that let users click on the model and view the assembly process. After a few more rounds of conversation, it grew into a more game-like, intriguing experience. It gives users direct feedback on the actions they take, while also reusing the 3D model already built for other materials and products. Above all, it’s fun to play.
Objectives
The main objectives for the tool are also outlined to help define what mechanics to include and how to measure success:
Supporting the education of the perceived complexity of the device
Motivating users to increase speed
Helping users to increase accuracy
Engaging users to pursue better results through continuous practice
Metrics
The success of the tool will be measured in the following aspects:
Improvement of understanding of assembly practice without the actual device
Improvement of reps’ completion time and accuracy
Increase of revisit and practice
Increase of communication and interaction with peers and instructors
Diving into the main objectives of the product, an outline of tasks is defined below, ordered from those most relevant to the main goal to less relevant functionalities.
Design Process
User Flow (detailed)
Wire Flow (early iteration)
Refining Wireframes (samples)
Design Comps (samples)
Testing (sample)
Challenges and Iterations
Every design journey has its own challenges, and this one is no exception. The first and foremost question that kept being raised: Is the tool really helping reps get as close as possible to real-device training?
The Fine Line between Reality and Digital Experience
To help reps get familiar with the device even without access to the actual set, 3D modeling is adopted, allowing them to visualize the device and kick-start a near-reality experience. It is applied not only to this particular tool but also globally to other marketing and training materials and related products. It was part of a bigger strategy to move from the 2D line-drawn style of the instructional graphics to 3D models and live videos to explain the use of the device.
But in this tool, reps get to explore and “touch” the device; although not a physical interaction, it should still carry over to their real-life practice. With WebGL implemented, this became possible.
In this scenario, we carefully considered what level of reality should carry over to the design solution. Here is one early exploration: the default game screen shows components off-grid in random order, close to how they look when taken out of the box and laid on the table in reality. When the correct piece is selected, the layout switches to a view with that piece in focus and the other pieces surrounding it for the next step.
It seemed like a reasonable solution, since the end goal of the tool is to help reps learn how to assemble the device in reality. But after further research and testing, a few shortcomings surfaced.
Without a clear indication, users were confused about where to start
Users tended to linger on components with bigger sizes or brighter colors, even when they were not relevant to the next move
The increased chance of misclicking added to user frustration, especially for smaller components
The lack of hints discouraged users from moving forward
Mouse movement (sample)
Component selected
The Noise
Thus a different approach was adopted: a more controllable, less distracting component tray to help users better identify, interact, and memorize. Adding disruptors still makes sense to keep the learning process challenging, so users build muscle memory for the correct pieces rather than for the training tool itself. Instead of presenting disorganized assets, we introduced real-world noise in different ways.
The assets shuffle after a step is done, ensuring users thoroughly understand the steps and can identify the parts they are interacting with.
Unrelated pieces are added to the component tray. These additional items come from the same package as the device and could be a distraction in reality.
Step 1
Step 2 with shuffled parts and unrelated items
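The tray behavior above can be sketched in code. This is a minimal, hypothetical illustration (the names `TrayPiece` and `buildTray` are assumptions, not from the actual tool): after each completed step, the remaining device parts are shuffled together with distractor items from the same package.

```typescript
// Illustrative sketch of the shuffled tray with distractor pieces.
// Types and function names are assumptions for this example.
interface TrayPiece {
  id: string;
  isDistractor: boolean; // true for unrelated items from the same package
}

// Fisher–Yates shuffle: every ordering is equally likely
function shuffle<T>(items: T[]): T[] {
  const out = [...items];
  for (let i = out.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [out[i], out[j]] = [out[j], out[i]];
  }
  return out;
}

// Rebuild the tray for the next step: remaining parts plus noise
function buildTray(remaining: TrayPiece[], distractors: TrayPiece[]): TrayPiece[] {
  return shuffle([...remaining, ...distractors]);
}
```

The shuffle keeps users from memorizing tray positions instead of the parts themselves, which is exactly the muscle-memory risk described above.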
The Clarity
Although some noise is added to mirror reality, it is also important to offer necessary clarification in the flow to support users’ learning process.
Breaking each task into smaller steps, with clear copy on the labels and descriptions
The whole process includes quite a few steps. Presented in an unfiltered, monotonous format, the complexity of the information would hurt viewers’ acceptance and engagement. To help users memorize the lengthy steps, the big task was broken into smaller ones: in this case, naturally done by assembling small parts into different sections, then connecting the sections in the last step. An indicator of finished steps is displayed to show progress through the process.
Early iteration: No clear descriptions of steps. Ambiguous indication. Broad labels
Revision: Details of components and steps. Each step clarified with a description of each trackable movement
Creating hierarchy for supportive messages to feed users in different levels
1. Tutorial animation: A high-level tutorial presented when users start the game for the first time, familiarizing them with the basic interactions to expect.
Users can access the tutorial at any time during the experience, but doing so will abort the current gameplay.
2. Overlaying hints: Subtle clues that provide immediate feedback on behaviors and gestures. They help avoid swamping users with too much information or interrupting the flow with unnecessary reading.
Interaction hints for rotation
Pattern for assets tray
Normal
Lifted
Triggering interactive area
Completed
Assembled part
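The tray-asset patterns shown above suggest a small state machine. The sketch below is an assumption about how those states might be modeled (the state names mirror the captions; the transition table is illustrative, not the tool’s actual logic):

```typescript
// Hypothetical state model for a tray asset, matching the overlay
// patterns above. Transition rules here are illustrative assumptions.
type AssetState = 'normal' | 'lifted' | 'triggering' | 'completed';

// Allowed moves between states during an assembly step
const nextStates: Record<AssetState, AssetState[]> = {
  normal: ['lifted'],                 // user picks the piece up
  lifted: ['normal', 'triggering'],   // drop it, or reach the target area
  triggering: ['lifted', 'completed'],// back out, or snap into place
  completed: [],                      // assembled parts stay assembled
};

function canTransition(from: AssetState, to: AssetState): boolean {
  return nextStates[from].includes(to);
}
```

Keeping the transitions explicit makes the overlay rendering trivial: each state maps to one visual pattern, and illegal moves simply never occur.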
3. Hints to remind next steps: Shown when the user is stuck during gameplay. Text hints appear after 3 wrong tries or 10 seconds of inactivity, providing necessary help without giving away the answer or interrupting memorization.
This hint also signals room for improvement: when users see it, they may have spent too much time (not speedy) or misclicked a lot (not accurate).
Measuring Scales And How To Improve Stickiness
The Two Scales Used To Measure Proficiency
There are two major scales to measure the proficiency of the users: speed and accuracy.
Speed is the visible measurement. A timer on screen shows how fast users assemble the device. A countdown runs before the timer starts, getting users ready and making the timing more accurate. To align with real-world practice, interrupting the gameplay (replaying tutorials, checking the profile page, etc.) aborts the current assembly.
Accuracy is a relatively hidden scale. With WebGL implemented, the areas users can interact with are defined precisely. This increases the chance of misclicking, but provides more detailed training on how and where to interact properly. Any mistake costs time for another try, which adds to the total gameplay time and is reflected in the end result.
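The two scales could be tracked together in a simple session object. This is an illustrative sketch (field and class names are assumptions); note how accuracy stays “hidden” as a derived value rather than an on-screen counter:

```typescript
// Hypothetical session record combining the visible speed scale and
// the hidden accuracy scale. Names are assumptions for this sketch.
class SessionResult {
  startedAt = 0;      // ms timestamp when the countdown ends
  finishedAt = 0;     // ms timestamp at completion
  correctClicks = 0;  // hits inside a valid interactive area
  misclicks = 0;      // hits outside, each costing a retry

  start(now: number): void { this.startedAt = now; }
  finish(now: number): void { this.finishedAt = now; }

  // Speed: elapsed time in seconds, shown on the timer
  get elapsedSeconds(): number {
    return (this.finishedAt - this.startedAt) / 1000;
  }

  // Accuracy: fraction of clicks that landed correctly
  get accuracy(): number {
    const total = this.correctClicks + this.misclicks;
    return total === 0 ? 1 : this.correctClicks / total;
  }
}
```

Because every misclick forces a retry, accuracy also leaks into the speed scale, matching the observation that mistakes are “reflected in the end result.”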
Light-hearted challenges can improve retention
To improve user engagement, challenges are added to the experience.
1. Self-challenge
After every game, the result is presented on the completion screen, and the best score is recorded and shown on the dashboard. Three types of badges are granted based on results: ranking, best score, and time played. To earn higher-level badges, users have to keep pushing for better results.
Completion of game
Dashboard
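The three badge types could be awarded by a small rules function like the one below. The thresholds here are pure assumptions for illustration; the actual criteria belonged to the client’s program:

```typescript
// Illustrative badge rules for the three badge types described above.
// All thresholds (top 10, 90s, 20 games) are assumptions, not the
// tool's real criteria.
type BadgeType = 'ranking' | 'bestScore' | 'timePlayed';

interface PlayerStats {
  rank: number;        // current position on the leaderboard
  bestSeconds: number; // fastest recorded completion time
  gamesPlayed: number; // total sessions completed
}

function earnedBadges(stats: PlayerStats): BadgeType[] {
  const badges: BadgeType[] = [];
  if (stats.rank <= 10) badges.push('ranking');           // assumed top-10 cutoff
  if (stats.bestSeconds <= 90) badges.push('bestScore');  // assumed time threshold
  if (stats.gamesPlayed >= 20) badges.push('timePlayed'); // assumed play count
  return badges;
}
```

Tying each badge to a different stat is what makes users keep coming back: a fast player still has to replay for the time-played badge, and a frequent player still has to improve for the score badge.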
2. Challenge others
The ranking of all users (in the system) is presented on the leaderboard, encouraging users to challenge again. After gameplay, users can share their results via email to invite others (in the system) to a challenge.
Outcomes
The Assembly Challenge tool designed for rep training was a great success and became a star piece of the package. It launched in 2018 and quickly gained popularity and love from the client’s team. To my surprise, it was also used internally as a showcase of the device to stakeholders beyond the reps team, proving it went beyond its original purpose and goal.
Due to limits on the information that could be shared with us, more detailed feedback and direct follow-up with end users were not accessible. It would be great to have an opportunity to learn more from users in the field, but alas, such is agency life. :D