Data-driven design
What is data-driven design?
Data-driven design is a method of making design decisions based on data rather than intuition, personal preference, or opinions. While the discipline does have the word ‘design’ in it, it is not the sole responsibility of a UX designer to push for data-driven design. Instead, the entire development team, including product owners, change managers, and even scrum masters, should be looking for ways to use data to make decisions about improving their product or service.
Why is data-driven design important?
Using data to drive decision-making minimizes assumptions and guesswork. While we are always experimenting and learning with agile methodologies, using data helps us make better decisions about which features to add or remove, and how work gets prioritized. The essence of great product design is how well a team balances user needs and business requirements, and data-driven design is our best tool for striking that balance.
What tools can be used to gather data?
Data can be broken down into two main types: quantitative and qualitative. Quantitative data is objective data that can be measured through concrete numbers or values; the questions we try to answer with quantitative data are what, when, and how something happens. Qualitative data is subjective data that cannot be measured objectively and typically answers the question of why something happens.
The tools we use to gather data and analyze findings vary depending on which type of data we are looking at.
Quantitative methods and tools
Analytics tools provide us with data about user behaviours based on the actions users take while they are on a web page or using a specific feature in a software application. These tools are especially useful because, in addition to the user behaviour data, they can also identify demographics and user types. For example, perhaps you notice that 60% of your users aren’t uploading photos before completing a service request. The business is concerned because, legally, there needs to be photographic evidence for all requests in case the government is sued. The analytics tell us that 95% of the users not uploading photos are inspectors represented by the persona that only uses the software once per month. Even though we don’t know why this is happening (see Qualitative methods and tools below), the team can start to ideate on ways to make the process easier for infrequent users. A sketch of how such an event might be instrumented follows the list of example tools below.
Examples of analytics tools:
Azure Application Insights
Adobe Experience Manager
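As an illustration, here is a minimal sketch of recording a custom event with the Application Insights JavaScript SDK; the connection string, event name, and properties are placeholders invented for this example, not a TC convention.

```typescript
import { ApplicationInsights } from "@microsoft/applicationinsights-web";

// Placeholder connection string -- use the one from your own
// Application Insights resource.
const appInsights = new ApplicationInsights({
  config: {
    connectionString:
      "InstrumentationKey=00000000-0000-0000-0000-000000000000",
  },
});
appInsights.loadAppInsights();

// Record that a service request was submitted without photos, tagging the
// event with the user's persona so it can be segmented later (for example,
// to find out that most of these events come from infrequent inspectors).
appInsights.trackEvent(
  { name: "ServiceRequestSubmittedWithoutPhotos" },
  { persona: "inspector", usageFrequency: "monthly" }
);
```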
A/B testing is a form of usability testing that compares design variants to see which performs better. Usually this means that one portion of users sees Variant A while the rest see Variant B. It is important that the team has a goal or outcome in mind when setting up A/B testing. For example, which call-to-action button placement leads to more applications submitted online: the button placed inline with the main heading (H1), or the button placed at the bottom of the text body and left-justified?
There are third-party software tools to help with A/B testing, but they can be challenging to use in the federal government because of the type of user data they collect, and the data is often not stored in Canada. You can set up your own A/B testing by using feature flags in your code: create a flag for variant A and a flag for variant B of a feature, turn variant A on for one group of users and variant B on for another, and watch the results.
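As an illustration of the feature-flag approach, here is a minimal sketch in TypeScript; the hashing scheme and 50/50 split are assumptions for the example, not a prescribed implementation.

```typescript
// Minimal sketch: deterministically assign each user to variant A or B
// based on a hash of their user ID, so a given user always sees the
// same variant across sessions.
type Variant = "A" | "B";

function hashUserId(userId: string): number {
  let hash = 0;
  for (const char of userId) {
    // Simple 32-bit rolling hash; good enough for even bucketing.
    hash = (hash * 31 + char.charCodeAt(0)) | 0;
  }
  return Math.abs(hash);
}

function getVariant(userId: string): Variant {
  // Even hashes see variant A, odd hashes see variant B (roughly 50/50).
  return hashUserId(userId) % 2 === 0 ? "A" : "B";
}

// Example: render the call-to-action button according to the user's variant.
const variant = getVariant("user-1234");
if (variant === "A") {
  // Variant A: button inline with the main heading (H1).
} else {
  // Variant B: button at the bottom of the text body, left-justified.
}
```

Because the assignment is derived from the user ID rather than chosen at random on each visit, each user has a consistent experience, which keeps the comparison between variants clean.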
Heatmaps and click tracking provide visual representations of user interactions on a website or in a product. We can use this data to identify areas where users struggle, or to highlight pages or areas of an app that are popular and get a lot of attention. For example, you might notice that the tabs on a page get a lot of clicks in a single session, which could prompt you to reorganize the information on the page to reduce the amount of tab jumping.
Microsoft Clarity is a powerful tool for collecting and analyzing heatmap and click-tracking data and it is available at TC by making an Orion Software Request.
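For example, once the Clarity tracking snippet is installed on a page, sessions can be tagged so that heatmaps and recordings can be filtered by user segment. A minimal sketch, assuming the snippet has already exposed the global clarity function; the tag name and value here are illustrative placeholders:

```typescript
// Assumes the Microsoft Clarity tracking snippet is already installed on
// the page, which exposes a global `clarity` function.
declare const clarity: (method: "set", key: string, value: string) => void;

// Tag the session with the user's persona so heatmaps and recordings can
// later be filtered to, for example, infrequent inspector users.
// The tag name and value are placeholders, not a TC convention.
clarity("set", "persona", "monthly-inspector");
```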
Advantages of quantitative methods
Identify areas of a webpage/product where users get stuck (areas for improvement)
Identify areas of a webpage/product that seem to work well (reuse these patterns more)
Easy to collect and analyze data with available tools
No need to schedule time with users
Data is anonymous
Offers reliable and continuous data
Disadvantages of quantitative methods
Doesn’t give us the why behind user behaviour
Data can be manipulated to support biases
May not have enough users for statistical significance
Unable to follow up on answers given
Limited by the fixed answer options on a survey
Qualitative methods and tools
User interviews and surveys are used to collect data on users' opinions, preferences, motivations, and goals. Where heatmaps might reveal the actions a user takes, interviews and exploratory surveys can help answer why users did specific things.
Interviews are powerful because you can ask follow-up questions and dive deeper into user behaviour. You also have the opportunity to clarify questions and provide context or analogies if users are having difficulty understanding them. However, interviews take more time to organize and cannot be done asynchronously.
Surveys are often quicker to distribute and users can fill them out on their own time, but the data will not be as comprehensive as interview findings. Users might also misinterpret questions on a survey and give an unintended answer.
Usability testing gives the team the opportunity to observe users as they interact with a product. A usability testing session involves a moderator prompting a user to complete one or more tasks, measuring the task success rate, and asking the user a few subjective questions about their experience. Often, what users say they do and what they actually do can be quite different, which is why usability testing can be so effective. The most important thing is to avoid helping the user in any way while they try to complete a task, including asking leading questions or providing guidance on what the user should do next.
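As a simple illustration of how session results can be turned into a metric, here is a sketch that computes a task success rate; the data shape, task names, and participant IDs are assumptions for the example, not a prescribed format.

```typescript
// Illustrative shape for recorded usability-test outcomes.
interface TaskResult {
  participant: string;
  task: string;
  completed: boolean;
}

// Task success rate = completed attempts / total attempts for a task.
function successRate(results: TaskResult[], task: string): number {
  const attempts = results.filter((r) => r.task === task);
  if (attempts.length === 0) return 0;
  const completed = attempts.filter((r) => r.completed).length;
  return completed / attempts.length;
}

const results: TaskResult[] = [
  { participant: "P1", task: "upload-photo", completed: true },
  { participant: "P2", task: "upload-photo", completed: false },
  { participant: "P3", task: "upload-photo", completed: true },
];

console.log(successRate(results, "upload-photo")); // ~0.67 (2 of 3)
```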
Advantages of qualitative methods
Explores attitudes and behaviours
Provides insight into why users behave the way they do
Flexibility
Encourages discussion
Disadvantages of qualitative methods
Can be challenging to schedule time with users
Time consuming
Can be difficult to differentiate patterns from individual behaviour
Need a skilled moderator
How data supports the MAHCD Process
Part of the Service and Digital Group (SDG) mandate is to enable our business partners through service design and building user-centred products. If your stakeholders and business partners come to you with a new project, or new challenges in an existing project, and there are no specific outcomes, it is our job to help them define specific, measurable, achievable, relevant, and time-bound (SMART) outcomes. Starting with specific and measurable outcomes not only aligns the team and stakeholders on a single problem at a time, but also gives us an idea of when something can be considered done, or “close enough”, so that we can move on to the next challenge.
Here is how data plays a key role in each phase of the MAHCD Process.
Discovery
Gather evidence from user research and environment scans so that you understand the users, the processes, and the technologies involved. Use your findings to help define the problem and identify potential opportunities. Start thinking about what success might look like and how that could be measured.
Design
Use the main outcomes and success metrics from the Discovery phase as your North Star while you ideate and brainstorm different solutions. At the start of each session, remind participants of the outcomes and success metrics so that everyone is aligned. If things get off track or focus shifts elsewhere, bring the discussion back to the main outcome.
Implement
It’s time to turn the idea into something tangible, whether it is a new product feature, or perhaps just a change in process.
Confirm the success metrics that you will use to determine whether the new idea provides user and/or business value. Next, determine how and when you will gather those metrics. Do you need to schedule usability testing? If so, how often, and how many sessions? Are you collecting quantitative analytics? If so, take benchmark measurements and decide how often you will measure to see whether you are moving in the right direction.
Test
Does it work the way we thought it would? Use metrics like the task completion rate from usability testing and accessibility test coverage on your code to make sure the solution is usable and accessible.
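One way to automate part of the accessibility check is with the open-source axe-core engine. Below is a hedged sketch using the @axe-core/playwright package; the page URL and test name are placeholders, and this covers only what automated scanning can detect, not full accessibility testing.

```typescript
import { test, expect } from "@playwright/test";
import AxeBuilder from "@axe-core/playwright";

// Sketch of an automated accessibility scan with axe-core.
// The URL below is a placeholder for the page under test.
test("service request page has no detectable accessibility violations", async ({
  page,
}) => {
  await page.goto("https://example.org/service-request");

  // Run the axe-core rules against the rendered page and fail the test
  // if any violations are reported.
  const results = await new AxeBuilder({ page }).analyze();
  expect(results.violations).toEqual([]);
});
```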
Evaluate
Get your idea into production as soon as possible so that we can start learning.
Measure the user value – does it help users save time, make fewer mistakes, or reduce overtime?
Measure the business value – does it give us more data to make informed oversight decisions so that we only inspect high-risk devices? Does it reduce the number of customer support calls, saving the program money?
Start measuring your success metrics and compare them against the benchmarks. Compare the findings to the desired outcomes defined at the beginning of the process to determine whether the needle is moving in the right direction.
Tips for presenting data
To get the most from the data you collect, you have to share it with others, whether that is your team, clients, or stakeholders. Data is an integral part of telling a compelling story because it grounds the story in truth. For this same reason, be aware that data can sometimes be quite jarring and create an emotional response from the audience – you might be presenting data that disproves a stakeholder’s well-intentioned idea, or perhaps the data means you have to redesign an entire feature that the team just spent 2 months building.
The best approach is to be empathetic to your audience and try to understand how they might react so that you can prepare accordingly. Use data visualization tools such as graphs, charts, and infographics, and accompany the visuals with direct quotes from your user research, drawing correlations to specific business requirements. It is much easier for people to acknowledge their idea might not be the best one when there is explicit data to prove it.
For example, if you are presenting data findings in a sprint review, emphasizing the impact on business requirements and outcomes will speak more to your stakeholders. If you are presenting data to your team, use a similar technique but emphasize the impacts on usability, accessibility, and desirability.
Resources
https://designlab.com/blog/what-is-data-driven-design
https://www.anparresearchltd.com/post/pros-and-cons-of-qualitative-research-vs-quantitative-research