Famous A/B testing examples

Here Are 10 Fascinating A/B Testing Examples That Will Blow Your Mind

 

If there’s one thing you need to know about good design practice, it’s that good designers test their work. As a business owner, you should be testing everything that goes in front of a user – websites, landing pages, emails, etc. – if you want the best results.

You wouldn’t just invent a product and send it out into the world without asking a few people if it was a good idea, right?

The things that get the most conversions are often counterintuitive. You shouldn’t assume that you know what will catch someone’s attention or persuade the most people to fill out your sign-up form. That’s why data-driven design can be the most effective.

Plus, let’s be honest, you want those conversions.

Here’s the skinny on 10 surprising split-testing results you’ll be able to cite at your next networking event.

1. Groove’s Landing Page Redesign

Groove’s conversion increased from 2.3% to 4.3%.

This first case study is a great example of how testing and optimization should be done. The team from Groove decided to do an extensive qualitative study. They spoke to their customers on the phone to figure out what words they use, and then they set up an after-signup autoresponder asking them why they signed up.

They used the results to write the landing page copy using their customers’ own words and designed the page only after they finalized the copy.
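Case studies like this one report the lift but rarely the underlying traffic, and a jump from 2.3% to 4.3% only matters if it isn’t noise. As a minimal sketch (not from Groove’s write-up), here is how you might sanity-check a result like that with a two-proportion z-test; the visitor counts below are hypothetical.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical sample sizes: Groove's actual traffic figures aren't given here.
control_visitors, control_signups = 10_000, 230   # 2.3% conversion
variant_visitors, variant_signups = 10_000, 430   # 4.3% conversion

p1 = control_signups / control_visitors
p2 = variant_signups / variant_visitors

# Pooled proportion and standard error for a two-proportion z-test.
pooled = (control_signups + variant_signups) / (control_visitors + variant_visitors)
se = sqrt(pooled * (1 - pooled) * (1 / control_visitors + 1 / variant_visitors))
z = (p2 - p1) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))       # two-sided p-value

print(f"Lift: {(p2 - p1) / p1:.0%}, z = {z:.2f}, p = {p_value:.4g}")
```

With samples that large, the difference is comfortably significant; with only a few hundred visitors per variation, the same percentages could easily be chance.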

2. Highrise’s Headline & Subheadline Test


Highrise tested different headline and subheadline combinations to see how they affected sign-ups. The test, performed with Google Analytics Experiments, showed that the variation telling visitors that sign-up is quick produced a 30% increase in clicks. It was also the only variation with an exclamation mark.

3. Humana’s Banner Test

A simpler design plus a stronger CTA led to 433% more clickthroughs.

Humana tested two different banners on their homepage.

The control was quite cluttered, with a full paragraph of copy and a less noticeable call-to-action. The variation was cleaner, with a strong, obvious CTA button. The two banners also featured different photos.

The cleaner variation achieved a staggering 433% increase in clickthroughs.

But they weren’t done there! Playing with the microcopy on the button (changing from “Get started now” to “Shop Medicare Plans”), they managed to achieve a further 192% increase in clicks.

4. The Olympic Store Checkout

Removing a barrier increased completed purchases by 21.8%.

Typically when shopping online, we’re met with the sign-up or log-in form as we try to check out. This causes unnecessary friction in the purchase process and may cost revenue.

The Olympic Store decided to test a different approach: they let customers check out without signing up, moving the extra step of creating an account to the end of the process, after checkout.

5. Hubspot’s Lead Conversion

Using an in-line CTA led to a 71% increase in conversions.

This test is very relevant to all of us who are building an email list (which you should be doing, too!). What’s better: a sign-up form inside a blog post or a separate sign-up page?

Hubspot decided to test this. They offered the lead magnet The Beginner’s Guide to Inbound Marketing at the end of their posts: in one variation, the form was embedded in the post (an in-line CTA); in the other, there was just a link to a landing page with the form.

The in-line CTA performed 71% better.

6. RummyCircle’s Mobile Facebook Ad

On mobile, version A’s Cost Per Action was 224.7% higher than version B’s. And the lower the CPA, the better.

India’s leading rummy site decided to test how differently worded Facebook ads affected sign-ups. Here’s why they tested it: in previous testing with desktop users, the team found that engaged users – those who commented on the RummyCircle Facebook ad – were more likely to click through to the gaming site. Commenting, therefore, facilitated lead generation for the site. And engaged leads converted.

But their mobile users turned out to be quite different. On mobile, the test showed that asking for comments actually decreased conversions to the email list.

7. Google+ Tests a Promo Banner on Mobile

Mobile website users increased by 17%.

Say what you will about Google+, but apparently the social network isn’t going anywhere. In this interesting case study, the Google+ team put the interstitial ads to the test.

Interstitials are the obtrusive full-screen ads that many websites (Pinterest, looking at you!) use to try to convert their mobile website visitors into app downloads. Supposedly, pushing visitors to the app should improve their experience with the content, but in practice it’s often the opposite.

With the interstitial in place, 69% of people left the mobile website right away, while only 9% of visitors clicked the “Install” button. After Google+ implemented a less obtrusive app ad, 1-day active users on mobile increased by 17%, while the rate of installs stayed almost unchanged.

8. Yuppiechef’s Navigation Test

Just removing the navigation led to a 100% increase in conversions.

Removing navigation is one of the tips we recommend for quickly increasing conversions, and Yuppiechef’s A/B testing results prove it. The tiny change produced a 100% increase in conversion rate, most likely because the page offers fewer distractions to users.

9. Centraal Beheer Achmea CTA Test

Adding a link resulted in 244.7% more clicks.

Achmea’s case study produced a very unexpected result. I normally recommend adding an additional link underneath call-to-action buttons for the banner-blind.

However, this particular website decided to test a call-to-action with a secondary link that didn’t direct people toward completing the offer; instead, it let them share the page on LinkedIn!

Surprisingly, the additional link that some would deem a distraction actually produced 244.7% more clicks on the main call-to-action button!

WhichTestWon explains that the reason for this result is likely the “Hobson’s Choice” effect. The difficult decision of whether or not to click becomes a decision about which button to click instead. It’s sort of like when you don’t feel like going to the gym: if you frame it differently – “Will I go to the gym or run today?” – you’re more likely to exercise.

10. Server Density Changes Their Pricing Model

Packaging services increased total revenue by 114%.

Server Density is a SaaS company providing server and website monitoring. Their initial pricing model was based heavily on their own costs. However, when they tested it against a packaged, value-based pricing model, they found that not only did overall revenue increase, but the number of free-trial sign-ups decreased, effectively lowering the cost of serving “tire-kickers.”

As you can see, sometimes even the tiniest tweak to your design can have huge results. You never know how much business you can drum up if you don’t try… and test. And test again.

30+ Product Management “Best Practices”

https://medium.com/@gclaps/30-product-management-best-practices-9520125ba5ad


Following up on my recent Quora answer, here is a big list of product management best practices, as well as my personal recommendations:

Feature Prioritization

Why? It’s quick and simple, and it answers the core problems most companies face: “What features should I get rid of, what should I do more of, and where should I innovate?”

Requirements Documentation

Why? Mapping solutions (user stories) to core customer goals (job stories) helps ensure you’re building a product people will actually use. When you realize that any goal has many possible solutions, you can begin to prioritize your effort to get 80% of the value out of 20% of the effort.

Performance Metrics

Why? OKRs set high level goals (objectives) and explicit measurable tasks on how to achieve these (key results). At an abstract level, these can tie to your product roadmap. This enables teams to act autonomously, focus on goals and just generally do more by mitigating the common question of “What should I do next?”.

Software delivery methodologies

Why? If you knew exactly what you needed to build and who you were building it for, you’d be making millions in a matter of days. Unfortunately, your amazing idea has probably been built by 10 other companies. And if it hasn’t, you’ll quickly find that your idea will change over the course of its life because of your market, technology, and an evolving customer base. In summary, apply lean and ‘agile’ thinking and you’ll be able to iterate on your idea faster and hit that sweet spot of product/market fit. The specific methodology doesn’t matter; what matters is that you stay objective. On a side note, I find that time-boxed iterations can incentivize unexpected ‘shortcuts’ that do more harm than good (e.g. pushing work to ‘done’, and introducing technical debt, because ‘done’ is the metric of success). Try to avoid this.

Product Roadmaps

Why? A simple Priority Bucket list helps you focus on the ‘now’ and ‘next’. It also allows you to keep the ‘Later’ in mind, but (importantly) out of sight. From there, you can fill up your ‘buckets’ with minimum marketable features. Building features with an MVP mindset ensures you don’t build the wrong things upfront, since it forces you to rely on feedback over making long-term assumptions.

Project Delivery Metrics

  • Cycle time => The time from when you start a task until it’s completed.
  • Lead time => The time from when a task is created until it’s completed.
  • Burn-down chart => A visualization of the daily progress of an iteration of work.

Why? Using metrics to predict future development timelines usually breaks down; there are just too many variables to account for. But by looking at metrics like cycle time and lead time in retrospect, you can quickly (and visually, if you use a time-in-process chart) find and discuss bottlenecks in your process. Or you can see what’s working so you can double down on it.
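As a small illustration (not from the article), here is how cycle time and lead time might be computed from task timestamps; the task records and field names are hypothetical.

```python
from datetime import datetime, timedelta

# Hypothetical task records; the field names are assumptions for illustration.
tasks = [
    {"created": datetime(2024, 1, 1), "started": datetime(2024, 1, 3), "completed": datetime(2024, 1, 5)},
    {"created": datetime(2024, 1, 2), "started": datetime(2024, 1, 2), "completed": datetime(2024, 1, 9)},
]

def cycle_time(task):
    # Time from when work started on the task until it was completed.
    return task["completed"] - task["started"]

def lead_time(task):
    # Time from when the task was created until it was completed.
    return task["completed"] - task["created"]

avg_cycle = sum((cycle_time(t) for t in tasks), timedelta()) / len(tasks)
avg_lead = sum((lead_time(t) for t in tasks), timedelta()) / len(tasks)
print(f"Average cycle time: {avg_cycle}, average lead time: {avg_lead}")
```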

Task Estimation

  • Relative estimates => Flexible, determined by factors you define (e.g. complexity, risk, repetition). Examples include story points & t-shirt sizing.
  • Absolute estimates => Pre-determined, with an absolute time (e.g. 1 day).

Why? Absolute estimates crumble at the hands of projects where you’re doing something for the first time, and that seems to be the case for 99% of tasks. Thinking about it logically, the likelihood of your upfront ‘guess’ being correct is low at best. Relative estimates, on the other hand, scale based on a range of proxies you determine (usually risk, repetition, complexity, etc.). If an initially ‘small’ problem turns out to be much bigger than expected, you know that a ‘large’ problem of similar complexity will likely grow by a similar proportion relative to its first estimate. While relative estimates aren’t a silver bullet, they’re our best bet at estimating effort for now.

‘Finding’ features

  • Quora answers => For example, product x vs. y posts, feature requests, praise or complaints.
  • Competitors’ feedback forums, testimonials and complaints => Find what’s already working and features customers are already requesting.
  • Customer interviews, surveys and calls => Ask customers what’s working and what they want to achieve.
  • Support tickets => Find what customers are already having issues with.
  • Internal team => Other teams in your company interact with your customers and can bring new perspectives to the table.

Why? Your intuition will break down at some point, and even then, you should never use it alone in the first place. Bring in as many data sources as possible and make the best product decisions based on them. Whether you succeed or fail, you’ll learn and, over time, make better decisions.

Team improvement

Why? You should be frequently trying to improve your teams by retrospectively looking at task and team performance. It’ll help you fix bottlenecks faster. In parallel, you can “run the board” to find and fix current blockers on your Kanban board ASAP.

Communication tools

Why? Personal preference. Plus, Slack slickly integrates with a bunch of other tools. And, calls to the USA are free with Google Hangouts (for now).

Metrics Every Software Product Manager Should Know

https://blog.aha.io/software-product-management-metrics/

by Brian de Haaff 

15 Metrics Every Software Product Manager Should Know

The world of product management is rapidly changing. It is more data driven than ever before. There is no doubt that data is impacting most jobs. But this is amplified for product managers, especially if they work for an emerging software company. Being a product manager at an early-stage company has never been more challenging.

But if you get the role and product right, it is the best job on Earth.

So, how do you help your software offering stand out in a crowded market? It is straightforward when you think about it. You need a strategy that combines a specific vision with the quantifiable metrics to measure your progress. To be successful, you must know where you are going and if you are getting closer or have arrived.

Metric-driven goals are fundamental to building great products. As the CEO of Aha!, I make it a high priority for our whole team to have goals and track them. You can only improve what you measure. We have been successful in setting up the right metrics for our business and we often outpace them.

But this growth did not happen by accident. We look at how our business is doing against our goals each week and measure how well we are responding to our customers every day. We also speak with hundreds of product managers and teams each week about their strategies and roadmaps. This gives us a sense of how leading teams measure their own progress.

So, now we know that every business needs a vision and clear goals. But how do you know which metrics you should be tracking? Which ones are right for your unique business?

There are hundreds of different metrics that product managers could potentially measure. But all successful teams have a core set of metrics that matter most to them and the nature of their business.

While every business is different, there are some metrics that we believe are important for SaaS companies and product managers to track. We measure them at Aha! and see huge gains from doing so. This list focuses on essentials and is split into three core areas: marketing, customer success, and business operations.

Marketing

Monthly unique visitors
Monthly unique visitors (UVs) are the number of unique individuals visiting your website each month. So, one person visiting the site multiple times will be counted as one unique visitor, as long as they use the same device to visit the site each time. Monthly UVs are a standard benchmark for marketing teams. Since this data is readily accessible from third-party websites, it is commonly used for competitive analysis.

Customer acquisition cost (CAC)
Customer acquisition cost, or CAC, is the estimated cost required to gain each new customer. For example, if you spend $1,000 on a campaign which directly results in 10 new customers, the CAC will be $100 per customer. Use this, the annual contract value, and customer lifetime value to understand if your customer acquisition model is profitable and sustainable.

Understanding how much it costs you to acquire new customers is key to scaling a SaaS business profitably. You can also gain a holistic picture of your marketing channels by segmenting CAC by source (organic, paid, email, social).
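A minimal sketch of CAC segmented by source; the spend and customer counts below are made up for illustration.

```python
# Hypothetical monthly spend and new-customer counts per acquisition channel.
spend_by_source = {"organic": 0, "paid": 5_000, "email": 1_200, "social": 800}
new_customers_by_source = {"organic": 40, "paid": 25, "email": 10, "social": 4}

for source, spend in spend_by_source.items():
    customers = new_customers_by_source[source]
    cac = spend / customers if customers else float("nan")
    print(f"{source}: CAC = ${cac:,.2f} per customer")
```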

Organic traffic vs. paid traffic
Organic traffic is a measure of how many people find your website via an unpaid organic search, while paid traffic is how many people are visiting your site from a paid source such as an ad. Measuring traffic by both organic and paid channels is essential to understanding where and how your business is growing. It will also allow you to make better decisions on which marketing campaigns are most valuable.

Customer Success

Conversion rate to customer
The conversion rate to customer is the percentage of potential customers who started a trial and end up converting to paid customers. It is most commonly measured by dividing the total number of new customers added in a given month by the number of leads or trials (depending on your model) in that same month.

Your conversion rate is a benchmark for how well you are doing at turning prospects into buyers. By increasing your conversion rate to customer even a small amount, you can quickly increase your customers and revenue.
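For example (hypothetical numbers), the calculation is simply:

```python
# Hypothetical monthly funnel numbers.
trials_started = 400
new_paying_customers = 36

conversion_rate = new_paying_customers / trials_started
print(f"Trial-to-customer conversion rate: {conversion_rate:.1%}")  # 9.0%
```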

Number of support tickets created
The number of support tickets created is a measure of how many customers are requesting help. An increase can be an indicator of more users or could point to an even deeper usability problem. With this data, the team can work to improve self-service options or may choose to add more team members when a heavier volume is expected.

First response time
First response time is an average measure of how long it takes for customer support to respond to a customer or act on a support ticket. For example, if a support request was sent by a customer at 7 a.m. and they received a response by 8 a.m., the first response time for that interaction would be one hour.

By tracking this metric each day and week, you can easily see areas for improvement and the times when help is most often needed. The first response time is critical to keeping customers happy and engaged with the product.

Time to close a support ticket
The time to close a support ticket is a measure of how long it takes for the support team to completely resolve an issue. This is different from first response time and gives a more holistic perspective on customer satisfaction. No matter how quickly you respond to the original request, the ticket or request will not be closed until the problem has been completely resolved and the customer is satisfied.

Churn
Churn is a measure of what was lost during a given period in terms of customers, dollars, etc. It is important to understand that no matter how good your software is, some customers will naturally cancel each month. So, planning for a healthy amount of cancelations is not a bad thing.

The simplest view of monthly customer churn is calculated by dividing the number of customers lost in a month by the prior month’s total. While it is good to know customer churn, in software companies, it is even more important to know the revenue lost through churn each month. Over time, you can work to reduce not only the number of customers who cancel but also the revenue associated with those lost customers.
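A minimal sketch of both views of churn, using made-up numbers:

```python
# Hypothetical figures for one month.
customers_last_month = 500        # prior month's total customers
customers_lost = 15
mrr_last_month = 250_000.0        # total MRR at the start of the month
mrr_lost = 6_000.0                # MRR from customers who canceled

customer_churn = customers_lost / customers_last_month
revenue_churn = mrr_lost / mrr_last_month
print(f"Customer churn: {customer_churn:.1%}, revenue churn: {revenue_churn:.1%}")
```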

Business Operations

Active users
Active users are the number of people using the product. This is another common benchmark used to determine the growth and relative size of a software company’s customer base. Active users do not include past users who have canceled or chosen not to renew.

New monthly recurring revenue (New MRR)
New MRR is the new monthly recurring revenue added in a given month. New MRR only refers to brand-new customers and does not include expansion revenue or upgrades to existing customer accounts. It is a great way to track new revenue growth on a consistent basis over time, as well as measure the number and size of new customers added each month.

Add-on monthly recurring revenue (Add-on MRR)
Add-on MRR is a measure of new monthly recurring revenue attributed to add-ons from existing customers. This could be additional product purchases or additional users added to an account. A healthy software company should be adding new customers each month and expanding relationships with existing customers at the same time. In many cases, add-on MRR is a better indicator of how useful your product is to your customer base. It is a very good sign if they are increasing their investment with you each month.

Total new monthly recurring revenue (Total new MRR)
Total new MRR is the net change in monthly recurring revenue for a given month. Total new MRR is different from new MRR because it also accounts for add-ons and churn (canceled customers). The most straightforward calculation to use is Total new MRR = New MRR + Add-on MRR – Churn MRR.

Measuring total new MRR allows you to roll up every aspect of your customer base from a financial standpoint into a single number that measures net change in revenue. For example, if you see a very high new MRR but a very low total new MRR, it means you are likely losing as many customers as you are adding each month. By adding your total new MRR to your existing MRR, you can quickly calculate your total MRR going forward.
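As a worked example (hypothetical numbers):

```python
# Hypothetical MRR movements for one month.
new_mrr = 12_000.0      # MRR from brand-new customers
addon_mrr = 4_500.0     # expansion MRR from existing customers
churn_mrr = 6_000.0     # MRR lost to cancellations

total_new_mrr = new_mrr + addon_mrr - churn_mrr
existing_mrr = 250_000.0

print(f"Total new MRR: ${total_new_mrr:,.0f}")                      # net change for the month
print(f"Total MRR going forward: ${existing_mrr + total_new_mrr:,.0f}")
print(f"Implied ARR: ${(existing_mrr + total_new_mrr) * 12:,.0f}")  # see the ARR metric below
```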

Total annual recurring revenue (ARR)
Total ARR or annual recurring revenue is your monthly recurring revenue (MRR) x 12. It is the annual value of recurring revenue from all customers, excluding one-time fees and other variable fees.

Annual contract value (ACV)

Annual contract value is the value of a customer over a 12-month period determined by their billing plan. Your ACV is essential when determining the type of customers you are converting (segmentation) as well as the ROI of your sales and marketing investments. Ideally, your ACV should be more than four times the average cost to acquire that customer.

Paying $4,000 to acquire and sign up a single new customer might sound like a lot, but it would be a wise investment for a company whose ACV is higher than $16,000. That means you would make your money back in the first quarter, assuming your average customer does not churn in that time period.
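The arithmetic behind that payback claim, as a quick sketch using the numbers above:

```python
# Numbers from the example above.
cac = 4_000.0
acv = 16_000.0

payback_months = cac / (acv / 12)   # months of contract revenue needed to recover CAC
print(f"CAC payback period: {payback_months:.0f} months")  # 3 months, i.e. the first quarter
```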

Lifetime value (LTV)
Lifetime value is the estimated net revenue from the customer over the life of the relationship. You determine the LTV by understanding the average revenue per month and multiplying it by the average lifetime of a customer in months.
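A minimal sketch of the LTV calculation; the figures are hypothetical, and estimating average lifetime as 1 divided by the monthly churn rate is a common approximation rather than something from this article.

```python
# Hypothetical inputs.
avg_revenue_per_month = 500.0
monthly_churn_rate = 0.02

# Average customer lifetime is often approximated as 1 / monthly churn rate.
avg_lifetime_months = 1 / monthly_churn_rate     # 50 months

ltv = avg_revenue_per_month * avg_lifetime_months
print(f"Estimated lifetime value: ${ltv:,.0f}")  # $25,000
```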

These are the core metrics that we track regularly at Aha! There are hundreds of possible data points to capture and study, but we do our best to match the data that brings the greatest insights to the organization with what is needed to continue to rapidly and efficiently grow.

Metrics are a necessary part of any business. Once you get comfortable understanding and applying quantifiable metrics that matter most for your business, you will be better able to spot trends, make decisions, and look ahead with more confidence.

Welcome to life as a data-driven product manager.

Did we miss anything for a rapidly growing SaaS business? What metrics matter most to you?


My Product Management Interviews (part 1 of 3)

 


So I decided to move on from my 5 years at a healthcare startup and start a new chapter in my career.

One of my mentors identified a weakness in my skillset: I had become too comfortable and very good at doing product management the same way for 5 years. He told me he deliberately stays only 2–3 years max at any employer, because you need to learn how to build businesses and products in various ways so that you can respond swiftly to any situation you face. The only way to do so is by exposing yourself to new problems you have never solved or been exposed to before and learning from your mistakes and failures.

This advice really resonated with me because I had done nothing but deliberately try to rapidly grow my overall product development experience, and I suddenly realized … he was right. I was on cruise control to an extent, and my most recent set of challenges and product roadmap items were not conducive to my personal career growth; they were merely rinse-and-repeat features. It was time to move on.


Company 1’s interview process included:

  • In person 1:1 interviews
  • A 60-minute mock scenario: “build a product from scratch, with 5 people in the room representing various stakeholders”

I was given a large whiteboard and dry-erase markers, and when the clock started I was given the problem to solve via a new product built from scratch. I needed to go through discovery, design, user story creation, and feature roadmap prioritization within 60 minutes.


The objective of the product:

Using the company’s foosball table, introduce a way for employees to not only play more foosball but also play with people from different departments, to facilitate cross-departmental interactions.

There are 2 floors of employees and people don’t know each other across floors. This matters because initial research shows that facilitating conversation between non-interacting departments greatly increases employee morale and improves perception of the company culture.

Stakeholder Representation:

  1. Engineering
  2. Design
  3. Executive Leadership
  4. HR

I started with basic questions like “how many foosball tables are available?” or “are there any specific KPIs that have been used for measurement in the past?”, etc. Get the foundational questions out of the way to prevent gotchas later (this shows that you think about how to avoid complexity later and that you want to keep the focus on the objective).


Discovery Exercise:

I went through the discovery process by utilizing the OGSM model one of my mentors taught me. Each section of the OGSM was a conversation between myself and the representing stakeholders. I worked from the assumption that I shouldn’t EVER just make a decision on my own, but instead ask questions for the table to answer, deciding only when it was obvious that consensus had been reached.

OGSM

(O)bjective:

  • Increase cross-departmental interactions between both floors by utilizing one foosball table

(G)oals:

  • Provide an incentive for employees to participate and play
  • Reward players that play against new players more than repeat matches
  • Reward players that play against members of non-interacting departments more than interacting departments (example: sales vs account management rewards less than sales vs software engineering)

(S)trategies:

  • Player profiles will be created utilizing the company’s active directory / google directory technology to identify department

— Provide matchmaking capabilities:

  • We agreed that a mobile-friendly web app was the fastest and cheapest development approach, compared to a native mobile app
  • Facilitate scheduling matchmaking that works best for each person’s calendar
  • Manual matchmaking to challenge opponents via employee name search
  • Auto matchmaking that automatically determines the matchup that provides the highest reward (based on department and non-interaction between departments)
  • Matchmaking notifications and reminders

— Leaderboard Capabilities

  • Publicly display a leaderboard on television screens across the office
  • Display leaderboard by individual player score
  • Display leaderboard by department
  • Admin account required to display leaderboard on televisions

(M)easures:

  • For the sake of time, we collectively agreed to skip the algorithm for weighting scores by department, but I brought it up as a technology scope-of-work item to prioritize based on the product objective (a rough sketch of the idea is below). I mentioned that analytics managers are a great resource for this conversation.
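Purely as an illustration of that weighting idea (this was not part of the interview, and the departments, weights, and field names are invented), an auto-matchmaker could score candidate pairings like this:

```python
# Hypothetical department pairs that already interact day to day.
INTERACTING = {
    frozenset({"sales", "account management"}),
    frozenset({"design", "engineering"}),
}

def matchup_score(player_a, player_b, past_matches):
    """Higher score = more desirable matchup for the auto-matchmaker."""
    score = 1
    pair = frozenset({player_a["name"], player_b["name"]})
    if pair not in past_matches:
        score += 2                       # reward playing someone new
    depts = frozenset({player_a["department"], player_b["department"]})
    if len(depts) == 2 and depts not in INTERACTING:
        score += 3                       # reward non-interacting departments
    return score

alice = {"name": "Alice", "department": "sales"}
bob = {"name": "Bob", "department": "engineering"}
print(matchup_score(alice, bob, past_matches=set()))  # 6: new opponent + non-interacting departments
```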

Once discovery was complete (it took me about 45 minutes or so), I was given 10 minutes to draw mockups.

Thanks to the OGSM model, I was easily able to draw mockups of how the matchmaking features would function and what the leaderboard TV screen would look like as well.

I repeatedly reminded them that I personally like to do discovery side by side with a UX designer, so that collaboration between the two of us is faster, the end goal is crystal clear to both of us, and we can catch each other on important “must have” details.


Once the mockups were drawn based on the OGSM discovery, I was given another 15 minutes to write up user stories on index cards and decide on prioritization.

I wrote the user stories based on the OGSM list on the whiteboard and decided to group them into 3 categories:

  • Matchmaking
  • TV Screen Leaderboard
  • Profile

I then prioritized the user stories from the MVP stage down to later iterations.

This picture shows the index cards I wrote as user stories, prior to engineering review.

I asked the engineering representative a lot of questions and asked for opinions when they came back into the room.

I explained that I did my best to prioritize based on the group conversations, but I wanted to review the priorities one last time with my engineering team prior to committing to the public version. The final prioritized list:

  1. Get a match to be scheduled (regardless of leaderboard or department integration)
  2. Get match results to be stored somewhere (regardless of leaderboard or department integration)
  3. Player department and profile creation (once matches are confirmed to be functional for scheduling and tracking via MVP)
  4. Enable users to find other players by name and schedule a match and track their match results
  5. Player notifications (via calendar invite) for scheduled matches
  6. Create leaderboard with an admin account to display scores on screens. Display scores by individual players only first.
  7. Display scores by department
  8. Scoring logic implementation
  9. Scoring algorithm prioritized by non-interacting department introduced
  10. Auto-match by best possible matchup
  11. Player submits times of availability for others to schedule matches towards.

Product Discovery

 

[Infographic] The Step-By-Step Guide to Product Discovery

Product discovery is an important yet often overlooked aspect of product development. Too often, usability is emphasized at the expense of utility. While the former is crucial, it is empty without considering the latter. According to a recent Clutch survey, nearly 70% of app development firms surveyed require a discovery stage before moving forward with a project. Product discovery is a process that helps us make sure we’re not just creating products that are usable, but also useful.


At Clearbridge, the way we tackle product discovery encompasses both usability and utility. We look at 4 key areas:

  • Problem Definition
  • Exploring
  • Solutioning
  • Prototyping

We’ll look at each of these areas, as well as the steps within. While processes for product discovery vary from organization to organization, we find that following these steps produces the best results.

Problem Definition

1. Define The Goal

At the most basic level, why are we building the product? What is the long-term goal? Identify this and write it down. You also want to include assumptions and obstacles. To reach our goal, what has to be true? What assumptions are we making with the product? What might cause us to fail to reach our goal?

2. Map The Process

Identify and map the user journey. For each user, what jobs (actions) must they take to reach the desired goal? List the users on the left, the story ending (goal) on the right, and all the actions in between.

3. Ask The Experts

Once you have mapped the process, you can identify the internal experts you need to talk to. The experts you need to involve will vary depending on what the user journey maps are, but typically they will be employees who use the existing product or are involved in the existing service. They understand what the current process is, what’s working, and what areas need improvement.

4. Write Down The Problems As Opportunities

For each problem you identify in the existing process or product, write it down as an opportunity statement. For example, if the problem is that it takes two hours to process a customer request, the opportunity can be “how might we reduce processing time for customer requests?”

Exploring

1. Explore Solutions

By now, you have all of your opportunities laid out. Hold a session where you hash out possible ideas and solutions that are related to those opportunities. Essentially, you are coming up with a variety of ways your product and features therein could potentially resolve pain points.

2. Sketch Possible Solutions

Take the ideas from the last step and put them on paper. Sketch out each of the solutions to visualize what they would look like in the product.

Solutioning


1. Choose A Solution

From the solutions you have identified, decide which one is the best for each problem that was mapped out during the Problem Definition phase. There will likely be multiple solutions for each problem; the purpose of this exercise is to determine which is the best.

2. Storyboarding

Map out how the solution will actually look and work in the hands of the end user. This step provides guidance on the user flows for app prototyping, the next phase in the product discovery process.

Prototyping

1. Create Prototype

The app prototyping phase is where you create a visualization of your app. Using the storyboard as a basis, build an interactive and clickable sample of the product experience to demonstrate how it will work (we also like to create mockups to provide a polished and branded representation of the product composition).

2. Validate

Once you have a working prototype created, validate the product by conducting user testing. Organize a focus group to collect feedback; your focus group can consist of internal team members or people you’ve located through a user testing tool/service. Collect feedback on your app prototype – how people interact with it, the issues they are running into, what they say about the experience, etc. – and use this to guide the direction of the product.
