How Google Analytics Uses Cookies To Identify Users

By Kathleen Harvey / August 17, 2017

Let’s talk about cookies and Google Analytics. Warning: this blog post does not talk about edible cookies. For recipes, go to the recipe section of our website. Just kidding, there’s nothing you can eat there either. It is a great resource for Google Tag Manager though!

There are many different types of cookies, edible and non-edible, but let’s address the cookie referenced in the question – the Google Analytics cookie.

The Basics of Cookies

A cookie is a small bit of information that gets stored on your computer. Cookies are browser-specific, which means Chrome and Firefox will not be able to see each other’s cookies.

Cookies are also site-specific, which means that one website will not be able to access the cookies that another website has saved in your browser.

The Basics of Google Analytics Cookies

Let’s review how and why Google Analytics uses cookies. We’ve written some more detailed posts about this topic, so I’ll link to those when appropriate. At a high level, here’s what you need to understand.

All versions of Google Analytics tracking that you can embed on your website use cookies to store and remember valuable pieces of information. Today, we’ll focus on the Universal Analytics implementation from Google Analytics, which really only has one cookie – the persistent _ga cookie.

The _ga cookie stores one valuable piece of information: your Client ID.

It looks something like this: GA1.2.&lt;random number&gt;.&lt;timestamp&gt;

This Client ID represents YOU. You are a User, and this is how Google Analytics will recognize and refer to you (behind the scenes).

Note: this post focuses on the cookies used to represent Users. There is also a great way to identify users who have logged in to your site, by passing a User ID to Google Analytics. For more details, check out the section at the bottom!

With a default, most basic implementation, when a user arrives on your website, the Google Analytics code executes and looks to see if there is a _ga cookie already present. If there is one, great! If there’s not, it will randomly generate a new Client ID for the new user.

This Client ID is in the form of four sets of numbers that are generated and then stored in a cookie on that user’s browser and computer.

What Does This Number Mean?

The Client ID is made up of a few different numbers, each of which means something different.

The first number is fixed at 1, which represents the version of the cookie format that’s being used.

The second number, which in the example above is 2, depends on where the cookie is set. It is the number of dots in the domain name that the cookie is set on (a domain with one dot = 1, a subdomain with two dots = 2).

The third set of numbers is randomly generated to identify different users. (Technically, a random integer between 0 and 2,147,483,647 – the maximum value of a signed 32-bit integer.)

The last set of numbers is a timestamp of when the user first visited the site, rounded to the nearest second (not millisecond) of the user’s first visit.
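To make the four fields concrete, here’s a small Python sketch that pulls apart a _ga cookie value (the value below is made up, but real ones follow the same shape):

```python
# A made-up _ga cookie value; real ones look like GA1.<dots>.<random>.<timestamp>
cookie = "GA1.2.1033501218.1368477228"

fields = cookie.split(".")
version = fields[0]            # "GA1": cookie format version 1
domain_depth = int(fields[1])  # 2: number of dots in the domain the cookie is set on
random_id = int(fields[2])     # random integer identifying this user
first_visit = int(fields[3])   # unix timestamp (seconds) of the user's first visit

# The Client ID that travels with every hit is the last two fields joined:
client_id = fields[2] + "." + fields[3]
```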

The _ga cookie is used to uniquely identify users, specifically with the third and fourth set of numbers explained above. Because of this random set of numbers, users can be identified when they come back to the site.

Google doesn’t necessarily know who a user is, and for the sake of web analytics, it uses cookies to help identify and separate unique users from each other. All of the behavior that you record on your website, Pageviews, Events, Transactions, etc. – everything you send into Google Analytics includes that Client ID so that Google Analytics can piece together a user’s history on your website.

Each piece of information that you send is a Hit, and has a Client ID attached.

Google Analytics then looks for hits that have the SAME Client ID, and it connects hits that occur during the same time period into Sessions.

A User – that is, a unique Client ID – will have anywhere from one to many sessions associated with it.
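That grouping can be sketched roughly like this (the hit data is made up, and the 30-minute inactivity window matches Google Analytics’ default session timeout):

```python
from itertools import groupby

SESSION_TIMEOUT = 30 * 60  # 30 minutes of inactivity ends a session

def sessionize(hits):
    """Group (client_id, timestamp) hits into sessions per Client ID."""
    sessions = []
    for cid, group in groupby(sorted(hits), key=lambda h: h[0]):
        current, last_ts = [], None
        for _, ts in group:
            if last_ts is not None and ts - last_ts > SESSION_TIMEOUT:
                sessions.append((cid, current))  # gap too long: close the session
                current = []
            current.append(ts)
            last_ts = ts
        sessions.append((cid, current))
    return sessions

# Made-up hits: two visits from one Client ID, one visit from another
hits = [("123.456", 1000), ("123.456", 1200), ("123.456", 9000), ("789.999", 1100)]
```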

Emily did a great job explaining this in her post: Understanding Scope in Google Analytics Reporting.

Here’s a modified image from her post. In this image, the “cid” is the name of the parameter that Google Analytics uses to store and identify the “Client ID.” This value comes from your cookies.

There’s a lot that could be covered about cookies – here are some quick hits!

How Long Does It Last?

The _ga cookie, by default, lasts for two years of inactivity. Every time a user visits your site, this extends the expiration to two years from the latest date. You can adjust this if necessary!

What About Clearing Cookies?

A ha! You’ve touched on a weakness of using cookies. A user can clear his or her cookies at any time. If a user visits your site and sends traffic info to Google Analytics with one Client ID, then at some point clears their cookies and returns to your site, they’ll get a brand new cookie and Client ID, and Google Analytics will treat them as a New User.

You need to remember that the metric Users in the default Google Analytics reports does not refer to specific individuals, but rather, specific Client IDs, which can change for many reasons.

How About Different Browsers?

Client ID is browser-specific so it is not passed to different browsers on the same device, like two different browsers on an individual’s computer.

How About Different Devices?

Users’ cookies are not shared across devices. Different browsers or devices will result in different cookies and therefore different users. How many browsers do you use to access the internet? Do you ever visit the same sites on different devices? You can spot the problem here.

But I’m Logged into Chrome…

It is possible to create a profile on Chrome with your login. You can have a personalized homepage, sync bookmarks, and have multiple users on the same computer. Unfortunately, the cookie is not passed between these logged in sessions and therefore a single user logged into Chrome on different devices will be seen as two users. Perhaps someday it will be possible to track logged in Chrome users as the same person, but until then, there’s not much you can do!

What About Subdomains and Cross-Domains?

Remember that cookies are site-specific. If you have either Subdomains or Cross-Domains that you are tracking together in Google Analytics, then you need to verify the following two parts.

The default Google Analytics implementation is designed to work across subdomains automatically. (Remember, two subdomains share the same root domain.) So if someone travels between those two sites, they’ll keep the same cookies. However, many people don’t use the default implementation from Google Analytics, or they haven’t updated it in years, or they’re using Google Tag Manager and haven’t thought about this issue. Read more about subdomain tracking.

Cross-domains are a totally different animal. (Remember that cross-domains are two entirely separate domain names.) In this case, your cookies will absolutely not be shared between the sites unless you set up Cross-Domain tracking with Google Analytics. Read more: Do I Need Cross-Domain Tracking?

Using the Client ID For Troubleshooting

Lastly – knowing and understanding the Client ID can be very helpful in troubleshooting common issues like subdomain tracking errors, cross-domain tracking, iframes … you name it.

Check out a few posts:

Cookies are useful for a number of reasons and are the backbone of most web analytics tracking. Understanding how they work and potential downsides of the basic Google Analytics tracking can be helpful in identifying tracking errors and better describing your data from Google Analytics.

There is a newish feature in Google Analytics that is designed to give you even better information – the User ID. Where cookies are generated randomly and are used to represent anonymous visitors, the User ID is handy for sites where you actually know who the person is. In this case, you can pass a non-personally-identifiable identifier to Google Analytics, and that can be used to stitch sessions and users across browsers and devices!
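As a sketch, a Measurement Protocol hit carrying both IDs might be built like this (the property ID, Client ID, and User ID values are all placeholders, and this only constructs the URL – it doesn’t send anything):

```python
from urllib.parse import urlencode

# Build a Measurement Protocol pageview hit that carries both the anonymous
# Client ID (from the _ga cookie) and your own non-PII User ID.
payload = urlencode({
    "v": "1",                        # Measurement Protocol version
    "tid": "UA-XXXXX-Y",             # your property ID (placeholder)
    "cid": "1033501218.1368477228",  # Client ID from the _ga cookie (made up)
    "uid": "user-42",                # your own non-PII User ID (made up)
    "t": "pageview",
    "dp": "/home",
})
url = "https://www.google-analytics.com/collect?" + payload
```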

More on User ID in Google Analytics:




By Roman Pichler



Scrum is a simple framework with three roles: product owner, development team, and Scrum Master. Each role provides a distinct type of leadership. As the product owner, you lead the product and are responsible for its overall success. The cross-functional development team makes the design and technology decisions; and the Scrum Master guides process and organisational change, as the following picture shows.

[Image: Scrum product team diagram]

MVPM: Minimum Viable Product Manager

You’ve probably seen this diagram before. It elegantly shows that product management is the intersection of a diverse skill set.

Originally from

Its simplicity has made it one of the most successful product management memes out there, and it’s done good things for the discipline.

Long ago, as a young PM padawan, it helped me realize that I needed to structure my learning for breadth. What it didn’t tell me, however, was where to focus — I started trying to learn everything, and in hindsight that was a mistake.

There isn’t enough time on this Earth to learn everything you could about those three circles, so as helpful as this diagram is, it ends up impractical.

Yea… not really helpful

What would have been far more helpful was to know what actually comprises that intersection:

That intersection is what I call the Minimum Viable Product Manager (MVPM), and it defines a set of skills and knowledge that are useful for being an effective generalist product manager, one who can work on almost any problem.

MVPM in no way implies that you need to achieve mastery of its skills to be effective – that would be both impractical and counterproductive for someone starting out. Instead, view it as a syllabus of sorts for the course in product management that doesn’t exist.

I write this for my younger self, for new product managers, and for more experienced PMs still looking to level up. To maintain some symmetry with the diagram, skills are divided into sections for each discipline. I cover three key concepts/skills to focus on, and one that you really shouldn’t focus on. As much as possible, it’s in plain language and is written for someone who’s approaching any of the subjects cold.

1. The Stack

When engineers refer to ‘the stack’, they’re talking about the layers of technologies that are used to provide functionality to your product (i.e. make the thing work). From the moment a customer loads your landing page to when they delete their account, the technologies in the stack handle everything.

Fastest way to learn — Ask an engineer to take you through the stack at a high level. Write down the names of each technology. Quick Googling of those terms will teach you some of the high-level benefits and trade-offs of each technology chosen, and how they work in harmony together. Stay at a high level, because you can easily fall down the rabbit hole (add “trade offs + benefits + vs” to your search query).

How does this make you a better PM? — When engineers are discussing how to build something, terminology flies around the room. Knowing the stack means you can at least follow along, and over time you’ll begin to understand what depth in the stack they’re referring to. Generally, the more layers in the stack they need to touch, or the deeper the layer, the more complicated and risky a change will be. Knowing this may push you to re-consider a different way to solve the problem.

2. System Architecture

If the stack represents what technologies are being used, system architecture represents how those technologies are structured to work together to deliver the product. Whereas the stack is mostly about raw technical capability, the architecture of a product incorporates the customer’s intended behaviour in its design.

Fastest way to learn — Ask an engineer to draw you the architecture. You’ll get something like this:

“Why are there only two of the thing called triple store?”

First, don’t panic. Ask them to walk you through what each component (box) in the system does. Some will handle internet requests, some will house the ‘business logic’, others still will hold the data that is saved (cylinders).

Second, believe it or not, this is very useful for you.

How does this make you a better PM? — When you understand the architecture, you start to think of your product like a system, which is generally how engineers will as well. Having an understanding of how each component in the system contributes to the whole helps you make better decisions and trade offs.

Generally, the components in the system that have the most connections are the most complicated to change because so many others rely on them for data or functionality. The more components you have to change in order to complete your build, the more dependencies you have, and the harder the project will be to execute.

In larger companies, the number of components you touch is often synonymous with the number of teams/groups you need to interact with, and the more alignment you’ll need to gain to execute a project.

3. The Data Model and its APIs

A data model organizes information used by your product and standardizes how pieces of that information relate to one another. By ‘information’, we’re really talking about things like Users, Products, and Credit Cards, which collectively are called entities. These entities can relate to each other in certain, structured ways; for example a User can have many Products, but only one Credit Card.
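Those relationships can be sketched as a toy data model (the entities come from the example above; everything else here is illustrative):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Product:
    name: str

@dataclass
class CreditCard:
    last_four: str

@dataclass
class User:
    name: str
    products: List[Product] = field(default_factory=list)  # a User has many Products...
    credit_card: Optional[CreditCard] = None               # ...but only one Credit Card

alice = User("Alice", products=[Product("Widget"), Product("Gadget")])
alice.credit_card = CreditCard("4242")
```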

The data model is closely related to the system architecture in that certain entities ‘live’ in certain components. Your Users model may live in component A and so might the Products data, but because of its sensitivity, Credit Cards live in component B. If your feature needs to show which Users own a product in a list, that’s pretty easy since they live in the same component. But if you need to know which of those users have a credit card stored, then component A needs a connection to component B in order to share the data. That’s harder, and to accomplish it, they need an API (application programming interface).
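Here’s a minimal sketch of that component A / component B conversation (the class and method names are invented, and in a real system this call would typically be an HTTP request to component B’s API):

```python
class BillingService:
    """'Component B': owns the sensitive Credit Card data."""
    def __init__(self):
        self._cards = {"u1": "4242"}  # user id -> stored card (made up)

    def has_card(self, user_id: str) -> bool:
        return user_id in self._cards

class CatalogService:
    """'Component A': owns Users and Products, but must ask B about cards."""
    def __init__(self, billing: BillingService):
        self._billing = billing
        self.user_ids = ["u1", "u2"]

    def users_with_cards(self):
        # Component A cannot see card data directly; it goes through B's API.
        return [u for u in self.user_ids if self._billing.has_card(u)]
```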

APIs are built on top of the data model and represent how any two components talk to each other and exchange information about their underlying models. Importantly, APIs also let you talk to external components. When you call an Uber from Google Maps, the Google Maps app is talking to a component from Uber. Most applications have Public APIs and Private APIs, which are usable by anyone on the internet or only by those you specify, respectively. Knowing your public APIs is critical to understanding how your product can interact with the outside world.

Fastest way to learn — You should focus first on gaining an understanding of your Public APIs. They’re usually easy to find, and often live on your website’s developer docs. You’ll see code there, which may or may not freak you out depending on your background, but if the documentation is half-way decent you should be able to read it fine. The beauty of studying your APIs is that they often represent most of your underlying data model, so you kill two birds with one stone.

How does this make you a better PM? — Knowing your data model expands your ability to know what information you can utilize to create better products, and how hard it may be to access that information. Knowing your APIs means you understand what types of information partners and third-party developers can get from your application, and therefore what types of integrations are possible. The extensibility of software is one of its most valuable properties, and being able to work well with other products (that your customers are potentially using every day) is quickly becoming table stakes.

4. Where you shouldn’t focus

Programming. Don’t get me wrong, I love programming and it does help you be better, but unless it’s a highly technical product, you just don’t need it to be an effective PM. If you find yourself coding as a PM, ask yourself whether you’re actually doing high-leverage work, or whether you simply aren’t sure what else you should be doing. That being said, I think it’s a very worthwhile and fun experience to have built at least one app and shipped it to a production environment.

1. Project Management

Boring, I know. I hate it too, but it is really important. If you can’t run a project well you’re never going to be a good PM. Period.

Fastest way to learn — This one is hard. To be an effective project manager takes a lot of experience and time. You can read up all you want, but at the end of the day it’s a human behaviour problem. It takes time to learn about the spectrum of personalities you’ll end up working with, and any advice you’ll find on how to approach it is often subjective to your personality, too.

That being said, there are some software specific things you can invest in to accelerate your learning curve:

  1. Understand the basics of product development so that you can empathize with your team. Learn about version control (Git), collaborative programming (GitHub), Quality Assurance processes, and at a high level how and when code gets deployed to users in your product.
  2. Learn about the common problems that plague software teams, and the processes others have developed to try and solve them. You’ll come across things like agile, scrum and kanban. There is value in learning the philosophies behind their approaches, whether your company uses them or not.
  3. Understand decision making at your company, and map out your stakeholders. These are often your customers, your boss, your team members’ bosses, and other PMs. Find a way to ensure that everyone is aware of the status and direction a project is going at a level contextual to what they care about (you’ll have to find that out too).

How does this make you a better PM? — You’ll get more shit done with your team, and people will enjoy working with you because everyone hates a poorly managed project.

2. Modelling Impact

Things that aren’t measured rarely get done well. Every product should have quantitative goals that are tied to its ultimate success – basic things like user growth, feature adoption, revenue, etc.

When your team is debating the highest leverage thing you could build next, it’s important that you can develop a model of how the product will move the dial on those metrics.

Fastest way to learn — It’s time to get your spreadsheet on. A good model clearly shows two things:

The unit economics of a product and the assumptions that create them:

  • How much does it cost to acquire a new customer?
  • How much does it cost to serve the product?
  • How much does a conversion move the needle on your goal?

The forecasted impact and the assumptions that create them:

  • How much does this product move the needle over the next year? The next three?
  • How many people will we need to hire to enhance and support it?
  • How are market forces like cost reductions, inflation, and competition accounted for in the long term?

Gotta love that fake model growth rate $$$
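A spreadsheet is usually the right tool, but the same model fits in a few lines of Python (every number below is made up for illustration):

```python
# Toy unit-economics model; all inputs are invented.
CAC = 50.0            # cost to acquire one customer ($)
COST_TO_SERVE = 2.0   # monthly cost to serve one customer ($)
REVENUE = 10.0        # monthly revenue per customer ($)
NEW_CUSTOMERS = 100   # customers acquired per month (assumed flat)

def cumulative_profit(months: int) -> float:
    """Forecast cumulative profit under the assumptions above."""
    profit, customers = 0.0, 0
    for _ in range(months):
        customers += NEW_CUSTOMERS
        profit -= NEW_CUSTOMERS * CAC                    # acquisition spend
        profit += customers * (REVENUE - COST_TO_SERVE)  # monthly margin
    return profit
```

Changing any one assumption and re-running the forecast is exactly the kind of sensitivity check a good model makes cheap.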

How does this make you a better PM? — The exercise of building a model for your product is a great way to test your instinctual assumptions and ensure that your product has enough potential to make it worth doing. It makes your job easier, too: it enables you to justify projects in a way that resonates with your stakeholders, and to easily compare the opportunity cost against other projects you could be doing.

3. Gather & Analyze Data

Being able to independently gather data is vital to making quick decisions. For all but the most involved analyses, relying on someone else to get data for you is not only an inefficient use of their time; it also doesn’t lead to insights, because anyone who’s been an analyst knows that insights come through iterative exploration of data, not some perfect report you dream up.

It also reduces your ability to make data-informed decisions when they matter. Almost every day, a decision about how a product should behave in a certain scenario will pop up, and having data to support a decision makes it easy for you and your team to feel confident in the right direction.

Fastest way to learn — Your goal is data independence. Whether you need to write SQL queries or use a drag-and-drop interface depends on the data infrastructure at your company. Regardless of what it is, you need to invest in learning the tools available to you. Google them.
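If your company’s data lives in a SQL database, the day-to-day work looks something like this (the table name, schema, and data here are all made up; your warehouse will differ):

```python
import sqlite3

# An in-memory stand-in for a company events table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (user_id TEXT, event TEXT);
    INSERT INTO events VALUES
        ('u1', 'signup'), ('u1', 'purchase'),
        ('u2', 'signup'), ('u3', 'signup');
""")

# "How many distinct users have purchased?" -- the kind of question you
# should be able to answer yourself, without waiting on an analyst.
purchasers = conn.execute(
    "SELECT COUNT(DISTINCT user_id) FROM events WHERE event = 'purchase'"
).fetchone()[0]
```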

How does this make you a better PM? — When data is easily accessible to you and you’re comfortable getting it, you will use it more and it will enable you to be more iterative. Whether you’re considering what to build next, or you’re seeing how your launch is doing, you will build a reflex to use data as an important input into your decision making — and better products will result.

4. Where you shouldn’t focus

Take this from someone with a business degree – don’t waste your time making strategic business cases, 3 year plans, and other MBA artifacts. I won’t go as far as to call it bullshit, but it’s not the way to succeed in software. Understand the vision, find a problem worth solving to achieve it, build a hypothesis to solve it, and then validate it as quickly as you can with real customers. Rinse and repeat.

1. Know the design patterns of your product

Most products develop design patterns over time, whether planned or not. Patterns are the consistent use of the same visual and interactive components in your product. All text on buttons is font-size 25px, all forms must be no more than 3 fields, every time an error happens we make an explosion sound and send the user an email with the details — these are all patterns.

Random example, Material Design on

Knowing your product’s patterns is critical to understanding how users map your product in their minds, and how they can effectively be given new features over time. If you usually give users a green button saying “Add New Feature” when you launch something, and this time you switch to an orange button that says “Blow your mind”, you will confuse the shit out of people.

As a product grows, consistent use of patterns becomes even more important because they enable teams to work independently of each other but still build a product that feels cohesive.

Design patterns are also usually developed in harmony with technical patterns, like style guides and components, which are basically libraries of re-usable code that speed up teams because they don’t have to re-design or re-implement the same functionality.

Fastest way to learn — Talk to your designer, they should know these patterns cold and (hopefully) be able to give you links to a style guide. Also talk to your front-end engineers, they can equivalently give you links to a pattern library.

How does this make you a better PM? — Plainly put, designing products on pattern is far easier and faster. Patterns let you stand on the shoulders of design decisions your team made in the past – decisions that result in a product that’s easier for customers to use. If you ever need to break existing patterns – to be clear, there are sometimes good reasons to do so – be prepared with very good reasons why it’s necessary for the long-term health of the product.

2. Know how to execute user experience research

PMs are supposed to be the voice of the customer. If you don’t understand your users, you will never build great products. From interviewing a single person face to face to quantitatively analyzing millions of user actions, understanding the basics of good research is imperative to your job.

Fastest way to learn — Effective research is a very big field, so instead of sending you into the rabbit hole, I recommend you focus on understanding the following:

  • Understand Sample Size and how to calculate statistical significance
  • How to normalize your sample and why that’s important
  • How to ask unbiased, non-leading questions in surveys and interviews
  • How to synthesize results and avoid bad conclusions
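As a sketch of the statistical-significance piece, here’s a standard two-proportion z-test in plain Python (the conversion numbers are made up):

```python
from math import sqrt, erf

def two_sided_p_value(conv_a, n_a, conv_b, n_b):
    """p-value for the difference between two observed conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)              # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Made-up experiment: 100/1000 conversions vs 130/1000
p = two_sided_p_value(100, 1000, 130, 1000)
```

A p-value below 0.05 corresponds to the conventional 95% confidence threshold.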

How does this make you a better PM? — By consistently and frequently testing your product with customers, you can take away a lot of the guesswork (and risk) in product development. Before a project even starts, you should be testing to validate that the problem you think you’re trying to solve really is one. While you’re designing and building, you should be testing that the product’s design is easy to use and is likely to solve the customer problem. After launching, you should be validating that the problem was solved for the customers you wanted to solve it for.

3. Know how to prototype your ideas

Prototyping in this context means being able to create visual mockups that can effectively express your ideas. They need to be good enough so that you can:

Communicate a product concept clearly

It is incredibly difficult to communicate a product experience verbally or in writing. A prototype, something people can see and preferably interact with (you can do this without code), is 10x more effective.

There are two reasons for this: first, it forces the articulation of the product in terms of what customers will actually interact with, and second, because humans naturally think visually, a prototype levels the playing field so that everyone on the team can speak the same language and give their points of view effectively.

Unblock a team when design is behind or absent

In most projects, it is important that the product’s design is ahead of development. Designers try to “stay ahead of the devs” because the switching costs for developers are much higher once they start building the product in a particular direction.

Because so much of product design is iterative and done in parallel with the build, when there’s a setback (e.g. user research says the design is not effective) design can quickly fall behind. It’s in those situations that a PM must be able to roll up her sleeves and be a “design intern” for the lead designer, helping to push pixels and ship mockups so the engineers can continue the build.

Fastest way to learn — I won’t spend time justifying this, but just start using Sketch, it’s like MS paint and Photoshop had a baby and it’s awesome.

How does this make you a better PM? — By prototyping and showing people what you’re thinking instead of assuming they understand, you will get better feedback from your team on your ideas, and reduce the risk that mis-communication leads to wasted effort. Also, it’s nice to be able to actually produce something tangible once in a while.

4. Where you shouldn’t focus

Don’t focus on being a great visual designer. Your ability to make a slick-looking interface is redundant and disempowering to someone who’s spent a career learning the deep craft that is product design. Unless you’re a design savant (to be clear, there are some), you also probably just think you’re good, and you actually suck.


I don’t want to trivialize learning all this stuff. It’s not easy, and it takes a lot of time, so tackle it bit by bit and enjoy what you’re learning. I hope this helps you be a little more efficient in your quest to be a great, if minimally viable, product manager.

12 things product managers should do in their first 30 days at a new company

Ken Norton is a product partner at Google Ventures where he advises startups on product management and also helps organize workshops.

Congratulations, a product has found its product manager. Perhaps you’re joining a small startup, or maybe you have a new project in a big company. How you approach your first 30 days will make a tremendous difference, setting you up for success or struggle.

Here are some tips for how to approach that first month, emphasizing three areas: People, Product, and Personal.


1. Set clear expectations with the CEO or your manager

You’ve been hired to fill a hole, and there will be organizational pressure for you to contribute immediately. Review your objectives with the CEO to make sure they have the right expectations for what you’ll be doing. Your primary goal for the first month is to effectively join a team.

2. Schedule a one-on-one with everyone on the team

Depending on the size of the company, this may take a few hours or the entire first month. Find time to meet with everyone individually.

I prefer walking one-on-ones – there’s something focusing and invigorating about walking together and looking ahead as opposed to staring at each other across from a conference room table.

3. Ask everyone this question

“What can I do to make your life easier?”

You’re showing that you’re here to help, not to command. How they answer is almost as important as what they say. You’ll get a true indication of how they perceive the PM role, and what they need from you.

4. Take a load off their back

Hopefully you’ll walk away from the meeting with something you can take from them that’s cutting into their productivity. Maybe an engineer would love for you to take over bug triage. Or weekly Costco runs.


5. Schedule time with your lead engineer to walk through the product’s technical architecture, in deep detail

Don’t shy away from asking questions or drilling down on things that didn’t quite make sense.

Too often PMs try to impress their engineers with their technical acumen, but in my experience engineers are much more impressed with PMs who are willing to ask questions and say “I don’t understand that.”

6. Resist the urge to jump in and start changing things

You’re going to want to start making changes to the product and the development process. I recommend holding back a bit in the beginning.

Your ideas and thoughts will be better formed after you’ve had a chance to settle in, gain credibility, and absorb all of the nuances. You’ll also be demonstrating that you’re a listener.

7. Get in front of your users

Spend a solid chunk of your early days with your users. Go on sales calls and customer visits. Take some support tickets. Get on the forums, engage with users on Twitter.

8. Fix something

I’m a firm believer in PMs being technical, and an excellent starter project is to fix a bug or launch a minuscule feature on your own.

Set up a dev environment and ask for something bite-sized that you can do. Ask for help and be considerate of your time and the team’s – you’re a PM after all, not a full-time engineer.


9. Read everything, and write it if it isn’t already written

Read anything you can get your hands on – old OKRs, specs, design documents, wiki pages. As you find documentation that is missing or out of date, add it. Take time to write up what you’ve learned and how things can be improved for the next hire.

10. Set some personal goals

Changing jobs can make you feel heroic about some things and woefully clueless about others. This is a chance to set some personal development goals. I like to keep it simple:

  • What is one thing you do really well that you want to continue to do? How are you going to stay in the habit of doing that?
  • What is one thing you need to improve at? What steps are you going to take to get better, and how are you going to measure your progress?

11. Configure your life support systems

Get all your tools and devices in order. Install the software you need. Create email filters. Set up Google News Alerts for your product and your competitors’ products.

12. Have fun!

Do you have other tips to share on how to be a better product manager? Let’s discuss in the comments below.

12 A/B Split Testing Mistakes I See Businesses Make All The Time


A/B testing is fun. With so many easy-to-use tools around, anyone can (and should) do it. However, there’s actually more to it than just setting up a test. Tons of companies are wasting their time and money by making these 12 mistakes.

Here are the top mistakes I see again and again. Are you guilty of making these mistakes? Read and find out.

#1: A/B tests are called early

Statistical significance is what tells you whether version A is actually better than version B, provided the sample size is large enough. 50% statistical significance is a coin toss. If you're calling tests at 50%, you should change your profession. And no, 75% statistical confidence is not good enough either.

Any seasoned tester has had plenty of experiences where a "winning" variation at 80% confidence ends up losing badly once you give it a chance (read: more traffic).

What about 90%? Come on, that’s pretty good!

Nope. Not good enough. You’re performing a science experiment here. Yes, you want it to be true. You want that 90% to win, but more important than having a “declared winner” is getting to the truth.




As an optimizer, your job is to figure out the truth. You have to put your ego aside. It’s very human to get attached to your hypothesis or design treatment, and it can hurt when your best hypotheses end up not being significantly different. Been there, done that. Truth above all, or it all loses meaning.

A very common scenario, even for companies that test a lot: they run one test after another for 12 months, declare many winners, and roll them out. A year later the conversion rate of their site is the same as when they started. Happens all the damn time.

Why? Because tests are called too early and/or sample sizes are too small. You should not call a test before you've reached 95% confidence or higher. 95% means there's only a 5% chance that the results are a complete fluke. A/B split testing tools like Optimizely and VWO both tend to call tests too early: their minimum sample sizes are way too small.

Here’s what Optimizely tells you:

A sample size of 100 visitors per variation is not enough. Optimizely leads many people to call tests early and doesn’t have a setting where you can change the minimum sample size needed before declaring a winner.

VWO has a sample size feature, but their default is incredibly low. You can configure it in the test settings:

Conspiracy theorists say VWO and Optimizely call tests early on purpose, to generate excitement about testing so users keep paying them. Not sure that’s true, but they really should stop calling tests early. Here’s an example I’ve used before. Two days after starting a test, these were the results:
The variation I built was losing badly, by more than 89% (and no overlap in the margin of error). Some tools would have called it already and said statistical significance was 100%. The software I used said Variation 1 had a 0% chance to beat Control. My client was ready to call it quits.

However, since the sample size was too small (only a little over 100 visits per variation), I persisted, and this is what it looked like 10 days later: the variation that had a 0% chance of beating Control was now winning with 95% confidence.

Watch out for A/B testing tools “calling it early” and always double check the numbers. The worst thing you can do is have confidence in data that’s actually inaccurate. That’s going to lose you money and quite possibly waste months of work.
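One way to double-check a tool's numbers is to run the significance math yourself. Here's a minimal sketch of a pooled two-sided two-proportion z-test using only the Python standard library; the visit and conversion counts are made up for illustration:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, confidence) for a two-sided, pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)     # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    return z, 1 - p_value

# A little over 100 visits per variation, like the early call described above
z, confidence = two_proportion_z_test(conv_a=5, n_a=110, conv_b=11, n_b=108)
print(f"z = {z:.2f}, confidence = {confidence:.0%}")
```

Notice that even a 2x apparent lift on ~100 visits per variation doesn't clear 95% confidence here, which is exactly why early results can flip.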

How big of a sample size do I need?

You don’t want to draw conclusions from a small sample size. A good ballpark is to aim for at least 350-400 conversions per variation (it can be less in certain circumstances, like when the discrepancy between control and treatment is very large). BUT magic numbers don’t exist. Don’t get stuck on a number: this is science, not magic.

You NEED TO calculate the actual required sample size ahead of time, using a sample size calculator like this or a similar one. Such tools are also useful for understanding the relation between uplift percentages and needed sample sizes:
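If you want to see what such a calculator does under the hood, here's a hand-rolled version using the standard normal-approximation formula (two-sided alpha = 0.05, 80% power); the 5% baseline and 20% relative lift are illustrative numbers, not a recommendation:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variation(baseline, relative_lift, alpha=0.05, power=0.80):
    """Visitors needed per variation to detect the lift (normal approximation)."""
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for 95% confidence
    z_beta = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Detecting a 20% relative lift on a 5% baseline conversion rate
print(sample_size_per_variation(0.05, 0.20))  # roughly 8,000+ visitors per variation
```

Note how quickly the required sample grows as the lift you want to detect shrinks: the denominator is the squared difference between the two rates.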

What if I have 350 conversions per variation, and confidence is still not 95% (or higher)?

If the needed sample size has been achieved, this means there is no significant difference between the variations. Check the test results across segments to see whether significance was achieved in one segment or another (great insights always lie in the segments, but you also need a big enough sample size for each segment). In any case, you need to improve your hypothesis and run a new test.

#2: Tests are not run for full weeks

Let’s say you have a high traffic site. You achieve 98% confidence and 250 conversions per variation in 3 days. Is the test done? Nope.

We need to rule out seasonality and test for full weeks. Did you start the test on Monday? Then you need to end it on a Monday as well. Why? Because your conversion rate can vary greatly depending on the day of the week.

So if you don’t test a full week at a time, you’re again skewing your results. Run a conversions-per-day-of-the-week report on your site and see how much fluctuation there is. Here’s an example:

What do you see here? Thursdays make 2x more money than Saturdays and Sundays, and the conversion rate on Thursdays is almost 2x better than on a Saturday.

If we didn’t test full weeks at a time, the results would be inaccurate. So this is what you must always do: run tests in blocks of 7 days. If confidence is not achieved within the first 7 days, run it another 7 days. If it’s not achieved within 14 days, run it another 7 days.
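The full-week rule is simple enough to encode. Here's a sketch of the date arithmetic, assuming you only ever stop on the same weekday you started and never before two full weeks (the dates are made up):

```python
from datetime import date, timedelta

def earliest_valid_end(start: date, min_weeks: int = 2) -> date:
    """First date you may stop: same weekday as the start, min_weeks later."""
    return start + timedelta(weeks=min_weeks)

def next_valid_end(start: date, current: date) -> date:
    """If confidence isn't reached by `current`, extend to the next full week."""
    days = (current - start).days
    full_weeks = max((days + 6) // 7, 2)  # round up to a full week, minimum two
    return start + timedelta(weeks=full_weeks)

start = date(2024, 4, 1)            # a Monday
print(earliest_valid_end(start))    # 2024-04-15, also a Monday
print(next_valid_end(start, date(2024, 4, 18)))  # extends to 2024-04-22
```

Both helpers always return the same weekday as the start date, so each test window covers every day of the week the same number of times.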

Of course, first of all you need to run your tests for a minimum of 2 weeks anyway (my personal minimum is 4 weeks, since 2 weeks is often inaccurate), and then apply the 7 day rule.

The only time when you can break this rule is when your historical data says with confidence that every single day the conversion rate is the same. But it’s better to test 1 week at a time even then.

Always pay attention to external factors

Is it Christmas? Your winning test during the holidays might not be a winner in January. If you have tests that win during shopping seasons like Christmas, you definitely want to run repeat tests on them once the shopping season is over. Are you doing a lot of TV advertising or running other massive campaigns? That may also skew your results. You need to be aware of what your company is doing.

External factors definitely affect your test results. When in doubt, run a follow-up test.

#3: A/B split testing is done without enough traffic (or conversions)

If you make 1 or 2 sales per month and run a test where B converts 15% better than A, how would you know? Nothing visibly changes!

I love A/B split testing as much as the next guy, but it’s not something you should use for conversion optimization when you have very little traffic. The reason is that even if version B is much better, it might take many months to achieve statistical significance.

So if your test took 5 months to run, you wasted a lot of money. Instead, you should go for massive, radical changes – and just switch to B. No testing, just switch – and watch your bank account. The idea here is that you’re going for massive lifts – like 50% or 100%. And you should notice that kind of an impact on your bank account (or in the number of incoming leads) right away. Time is money. Don’t waste time waiting for a test result that takes many months.

#4: Tests are not based on a hypothesis

I like spaghetti. But spaghetti testing (throw it against the wall, see what sticks)? Not so much. That’s when you test random ideas just to see what works. Testing random ideas comes at a huge expense: you’re wasting precious time and traffic. Never do that. You need a hypothesis. What’s a hypothesis?

A hypothesis is a proposed statement made on the basis of limited evidence that can be proved or disproved, and which is used as a starting point for further investigation.

And this shouldn’t be a spaghetti hypothesis either (a random statement). You need to do proper conversion research to discover where the problems lie, then analyze why they happen, and ultimately come up with a hypothesis for overcoming them.

If you test A vs. B without a clear hypothesis and B wins by 15%, that’s nice, but what have you learned? Nothing. The lift matters, but what you learn about the audience matters even more: it helps you improve your customer theory and come up with even better tests.

#5: Test data is not sent to Google Analytics

Averages lie, always remember that. If A beats B by 10%, that’s not the full picture. You need to segment the test data, that’s where the insights lie.

While Optimizely has some built-in segmentation of results, it’s still no match to what you can do within Google Analytics. You need to send your test data to Google Analytics and segment it. If you use Visual Website Optimizer, they have a nice global setting for tests, so the integration is automatically turned on for each test you run.

Set it and forget it:

Optimizely makes you suffer for whatever stupid reason. They make you switch on the integration for each test separately.

They should know that people are not robots and sometimes forget. Guys, please make a global setting for it. What happens is that the test info is sent to Google Analytics as custom variables, and you can run advanced segments and custom reports on it. It’s super useful, and it’s how you actually learn from A/B tests (including losing and no-difference tests).


But Monetate, which should be a class above the other two services since it costs way more, is not even able to send custom reports. Ridiculous, I know. It can only send test data as events.

So in order to get more useful data, create an advanced segment for each variation, based on the event label:

Then you can check whatever metrics in GA with the segment for each variation applied.

Bottom line: always send your test data to Google Analytics. And segment the crap out of the results.

#6: Precious time and traffic are wasted on stupid tests

So you’re testing colors, huh? Stop.

There is no best color; it’s always about visual hierarchy. Sure, you can find tests online where somebody found gains by testing colors, but those are all no-brainers. Don’t waste time testing no-brainers, just implement. You don’t have enough traffic; nobody does. Use your traffic on high-impact stuff. Test data-driven hypotheses.

#7: They give up after the first test fails

You set up a test, and it failed to produce a lift. Oh well. Let’s try running tests on another page?

Not so fast! Most first tests fail. It’s true. I know you’re impatient, so am I, but the truth is that iterative testing is where it’s at. You run a test, learn from it, and improve your customer theory and hypotheses. Run a follow-up test, learn from it, and improve your hypotheses. Run a follow-up test, and so on.

Here’s a case study where it took 6 tests (testing the same page) to achieve the kind of lift we were happy with. That’s what real testing life is like. People who approve testing budgets—your bosses, your clients—need to know this.

If the expectation is that the first test will knock it out of the park, money will get wasted and people will get fired. It doesn’t have to be that way; it can be lots of money for everyone instead. Just run iterative tests. That’s where the money is.

#8: They don’t understand false positives

Statistical significance is not the only thing to pay attention to. You need to understand false positives too. Impatient testers will want to skip A/B testing, and move on to A/B/C/D/E/F/G/H testing. Yeah, now we’re talking!

Or why stop here, Google tested 41 shades of blue! But that’s not a good idea. The more variations you test against each other, the higher the chance of a false positive. In the case of 41 shades of blue, even at 95% confidence level the chance of a false positive is 88%.
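The 88% figure is straight arithmetic: at a 95% confidence level each comparison carries a 5% false-positive risk, and those risks compound across variations:

```python
def family_false_positive_rate(variations, alpha=0.05):
    """Chance of at least one false positive across `variations` comparisons."""
    return 1 - (1 - alpha) ** variations

print(f"{family_false_positive_rate(1):.0%}")   # plain A/B test: 5%
print(f"{family_false_positive_rate(41):.0%}")  # 41 shades of blue: ~88%
```

Even a modest A/B/C/D test (three comparisons against control) already pushes the family-wide false-positive chance past 14%.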

Watch this video, you’ll learn a thing or three:

Main takeaway: don’t test too many variations at once. And it’s better to do simple A/B testing anyway, you’ll get results faster, and you’ll learn faster—improving your hypothesis sooner.

#9: They’re running multiple tests at the same time with overlapping traffic

You found a way to cut corners by running multiple tests at the same time. One on the product page, one on the cart page, one on the home page (while measuring the same goal). Saving time, right?

This can skew the results if you’re not careful. It’s actually likely to be fine unless you suspect strong interactions between the tests and there’s a large overlap of traffic between them. Things get trickier when interactions and traffic overlap are both likely.

If you want to test a new version of several layouts in the same flow at once—for instance running tests on all 3 steps of your checkout—you might be better off using multi-page experiments or MVT to measure interactions, and do attribution properly.

If you decide to run A/B tests with overlapping traffic, keep in mind even distribution. Traffic should be split evenly, always. If you test product page A vs B, and checkout page C vs D, you need to make sure that traffic from B is split 50/50 between C and D (e.g. as opposed to 25/75).
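A common way to keep overlapping splits both even and independent is deterministic hashing on the visitor ID plus the test name: the same visitor always lands in the same bucket within a test, and the bucket in one test says nothing about the bucket in another. A sketch (the visitor IDs and test names are made up):

```python
import hashlib

def assign(visitor_id: str, test_name: str, variations=("A", "B")) -> str:
    """Deterministically bucket a visitor, independently per test."""
    digest = hashlib.sha256(f"{visitor_id}:{test_name}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variations)  # near-uniform over variations
    return variations[bucket]

# Assignment in one test is independent of assignment in the other
print(assign("visitor-123", "product-page"))
print(assign("visitor-123", "checkout-page"))
```

Because the test name is part of the hash input, visitors who saw B on the product page still split roughly 50/50 between C and D on the checkout page.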

#10: They’re ignoring small gains

Your treatment beat the control by 4%. “Bah, that’s way too small of a gain! I won’t even bother implementing it,” I’ve heard people say.

Here’s the thing. If your site is pretty good, you’re not going to get massive lifts all the time. In fact, massive lifts are very rare. If your site is crap, it’s easy to run tests that get a 50% lift all the time. But even that will run out.

Most winning tests are going to give small gains: 1%, 5%, 8%. Sometimes a 1% lift can mean millions of dollars in revenue. It all depends on the absolute numbers you’re dealing with. But the main point is this: you need to look at it from a 12-month perspective.

One test is just one test. You’re going to run many, many tests. If you increase your conversion rate 5% each month, that compounds to an 80% lift over 12 months. That’s compound interest at work. That’s just how the math works, and 80% is a lot.
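The compounding claim checks out; twelve monthly 5% lifts multiply rather than add:

```python
monthly_lift = 0.05
annual = (1 + monthly_lift) ** 12 - 1  # lifts compound month over month
print(f"{annual:.0%}")  # ~80% cumulative lift over 12 months
```

Simple addition would predict only 60%; compounding is where the extra 20 points come from.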

So keep getting those small wins. It will all add up in the end.

#11: They’re not running tests at all times

Every single day without a test is a wasted day. Testing is learning: learning about your audience, learning what works and why. Every insight you gain can be used across all of your marketing, like PPC ads and whatnot.

You don’t know what works until you test it. Tests need time and traffic (lots of it).

Having one test up and running at all times doesn’t mean you should put up garbage tests. Absolutely not. You still need to do proper research, have a proper hypothesis and so on.

Have a test going all the time. Learn how to create winning A/B testing plans. Never stop optimizing.

#12: Not being aware of validity threats

Just because you have a decent sample size, confidence level and test duration doesn’t mean that your test results were actually valid. There are several threats to the validity of your test.

Instrumentation effect

This is the most common issue. It’s when something happens with the testing tools (or instruments) that causes flawed data in the test.

It’s often due to incorrect code implementation on the website, and it will skew all of your results. You’ve got to really watch for this. When you set up a test, watch it like a hawk. Verify that every single goal and metric you track is being recorded. If some metric is not sending data (e.g. add-to-cart click data), stop the test, find and fix the problem, and start over with the data reset.

History effect

Something happens in the outside world that causes flawed data in the test. It could be a scandal about your business or an executive working there, it could be a special holiday season (Christmas, Mother’s Day, etc.), or maybe there’s a media story that biases people against a variation in your test. Pay attention to what is happening in the external world.

Selection effect

This occurs when we wrongly assume some portion of the traffic represents the totality of the traffic. Example: you send promotional traffic from your email list to a page that you’re running a test on. People who subscribe to your list like you way more than your average visitor. So now you optimize the page (e.g. landing page, product page etc) to work with your loyal traffic, thinking they represent the total traffic. But that’s rarely the case!

Broken code effect

One of the variations has a bug that causes flawed data in the test. You create a treatment and push it live, but it loses or shows no difference. What you don’t know is that your treatment displayed poorly on some browsers and/or devices. Whenever you create a new treatment or two, run quality assurance tests on it to make sure it displays properly in all major browsers and devices.


Today there are so many great tools that make testing easy, but they don’t do the thinking for you. I understand statistics wasn’t your favorite subject in college, but it’s time to brush up. Learn from these 12 mistakes so you can avoid them, and start making real progress with testing.