Not All Campaign Lift Metrics Are Created Equal. Here’s Why.

In today’s data-rich advertising landscape, marketers have access to robust campaign analytics to inform decisions about marketing mix and advertising spend. With endless advertising choices, marketers try to make apples-to-apples comparisons between key campaign metrics like “campaign lift” to evaluate competing advertising vehicles. Unfortunately, measurement providers calculate lift in different ways, which prevents direct comparison and makes it increasingly difficult to understand which advertising vehicle truly delivers a better ROI.

In the indoor mobile location industry, “campaign lift” generally represents the difference in some real-world action (think store or department visitation) between a “test” group of users who were exposed to an advertising campaign and a “control” group who were not.
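The arithmetic behind the metric is straightforward: compute each group’s visitation rate and express the exposed group’s rate relative to the control baseline. Here is a minimal Python sketch (the function and variable names are hypothetical, for illustration only):

```python
def campaign_lift(exposed_visits, exposed_total, control_visits, control_total):
    """Relative lift in visitation: exposed group vs. unexposed baseline."""
    exposed_rate = exposed_visits / exposed_total
    control_rate = control_visits / control_total  # the baseline rate
    return (exposed_rate - control_rate) / control_rate

# Example: 6% of exposed shoppers visit vs. a 5% baseline -> 20% lift.
print(f"{campaign_lift(600, 10_000, 500, 10_000):.0%}")
```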

The biggest issue marketers encounter with lift metrics is the way providers define the “control” group. For a “lift” metric to be reliable, the control group must look the same as the test group, with the only major difference being whether the group was exposed to the advertising campaign. Unfortunately, this is where many measurement providers come up short, producing lift numbers that can’t be trusted.

Luckily, with indoor mobile location, you can easily draw your control group from people who look very similar to your test group. For example, with Swirl’s Impact Analysis tool, when testing mall-to-store conversion rate, the control group of unexposed consumers will also be mall shoppers who visit the mall during the same time period as your test group. In this instance, the control group’s visitation rate serves as the baseline, so the “lift” metric accurately represents the difference in visitation for shoppers served a specific advertising message.
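As a rough illustration of that matching logic, the sketch below draws both groups from the same mall-visitor population and the same campaign window, so exposure is the only systematic difference. The data structures, field names, and dates are hypothetical stand-ins, not Swirl’s actual implementation:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class MallVisit:
    visit_date: date
    exposed: bool        # shopper saw the in-mall ad message
    visited_store: bool  # shopper later entered the advertised store

# Toy records standing in for indoor-location visit logs.
visits = [
    MallVisit(date(2024, 3, 2),  exposed=True,  visited_store=True),
    MallVisit(date(2024, 3, 5),  exposed=True,  visited_store=True),
    MallVisit(date(2024, 3, 9),  exposed=True,  visited_store=False),
    MallVisit(date(2024, 3, 12), exposed=False, visited_store=True),
    MallVisit(date(2024, 3, 16), exposed=False, visited_store=False),
    MallVisit(date(2024, 3, 23), exposed=False, visited_store=False),
]

# Both groups come from the same mall and the same campaign window, so
# ad exposure is the only systematic difference between them.
start, end = date(2024, 3, 1), date(2024, 3, 31)
in_window = [v for v in visits if start <= v.visit_date <= end]
test = [v for v in in_window if v.exposed]
control = [v for v in in_window if not v.exposed]

baseline_rate = sum(v.visited_store for v in control) / len(control)
exposed_rate = sum(v.visited_store for v in test) / len(test)
print(f"baseline {baseline_rate:.0%}, exposed {exposed_rate:.0%}")
```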

In addition to ensuring similarity between the test and control groups, Swirl’s platform also empowers advertisers to customize the statistical settings behind each analysis, including whether to run a one-tailed or two-tailed test and what confidence level to use (the percentage of all possible samples that can be expected to include the true population parameter). If you’re a statistics pro, there are a number of additional customization options within the Swirl platform to ensure confidence in the final results, including the following (a worked sample-size sketch follows the list):

Desired Increase Detection: The minimum absolute change in conversion rate you would like to be able to detect

Power: The probability that the test will reject a false null hypothesis

Control Group Sample: The percentage of the total sample allocated to the control group, with the remainder forming the test group
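Taken together, these settings drive a standard sample-size calculation for comparing two proportions. Below is a minimal sketch using the common normal-approximation formula; the function name, defaults, and example numbers are assumptions for illustration, not Swirl’s actual tooling:

```python
import math

from scipy.stats import norm

def required_sample_sizes(baseline_rate, min_detectable_increase,
                          confidence_level=0.95, power=0.80,
                          two_tailed=True, control_fraction=0.5):
    """Per-group sample sizes for a two-proportion z-test (normal approx.)."""
    p_control = baseline_rate
    p_test = baseline_rate + min_detectable_increase
    alpha = 1.0 - confidence_level

    # A two-tailed test splits alpha across both tails of the distribution.
    z_alpha = norm.ppf(1 - alpha / 2) if two_tailed else norm.ppf(1 - alpha)
    z_beta = norm.ppf(power)  # power = probability of rejecting a false null

    # Ratio of test-group size to control-group size implied by the split.
    ratio = (1 - control_fraction) / control_fraction

    variance_term = (p_control * (1 - p_control)
                     + p_test * (1 - p_test) / ratio)
    n_control = variance_term * ((z_alpha + z_beta)
                                 / (p_test - p_control)) ** 2
    return math.ceil(n_control), math.ceil(ratio * n_control)

# Example: 5% baseline conversion, detect a 1-point absolute lift at a
# 95% confidence level with 80% power, holding out 20% as the control.
n_control, n_test = required_sample_sizes(0.05, 0.01, control_fraction=0.20)
print(f"control: {n_control}, test (exposed): {n_test}")
```

One practical trade-off worth noting: shrinking the control split frees more of the audience for exposure, but it raises the total sample needed to detect the same lift at the same power.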

To ensure that your measurement provider is accurately constructing a control group, ask whether the control group matches the demographic, geographic, and device location makeup of the test group. And, if you’re interested in learning more about Swirl’s statistical analysis tools, contact us.


Want to see the Swirl platform in action?