Saturday 21 Sep, 2019

Starting at 9am

Etc Venues

One Drummond Gate, Victoria, London SW1V 2QQ

I used to think I was indecisive but now I’m not so sure…

Or: why it’s good to be able to change your mind in digital analytics

Campaign attribution is a surprisingly complex area. The basic concept seems pretty straightforward: work out which of my marketing campaigns are driving conversions, so I can do a cost-benefit analysis of my promotional spend. I can run a simple off-the-shelf “last click attribution” report and get an answer. But is it the right answer? What about first click? Or weighted click…? The more you think about the problem, the more complex it becomes.
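To make those options concrete, here is a minimal sketch (in Python) of three common click-attribution rules – last click, first click, and an even “linear” weighting – applied to a single made-up journey. The campaign names and the one-journey setup are purely illustrative, not anything from a real project:

```python
# Three common click-attribution rules, applied to one made-up journey.
# Campaign names and credit values here are purely illustrative.

def last_click(touches):
    """100% of the conversion credit goes to the final campaign touch."""
    return {touches[-1]: 1.0}

def first_click(touches):
    """100% of the credit goes to the first campaign touch."""
    return {touches[0]: 1.0}

def linear(touches):
    """Credit spread evenly across every touch (one simple weighted rule)."""
    credit = {}
    for t in touches:
        credit[t] = credit.get(t, 0.0) + 1.0 / len(touches)
    return credit

# One visitor's journey: three campaign touches before converting.
journey = ["paid_search", "email", "display"]
print(last_click(journey))   # {'display': 1.0}
print(first_click(journey))  # {'paid_search': 1.0}
print(linear(journey))       # each touch gets roughly 0.33
```

Even on this tiny example you can see the problem: three perfectly reasonable rules give three different answers about which campaign deserves the credit.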

I’ve been helping out on a campaign attribution project for one of our iJento clients – a project with an additional source of complexity. Our client sells an expensive, high-consideration product. Visitors typically carry out online research, including generating a personalised quote, and may then purchase either online or offline. So we had several different scenarios to consider:

  1. A person purchases online. That’s valuable.
  2. A person gets a quote online without purchasing. Less valuable, but still good – they may convert offline.
  3. A person gets a quote online, and then (later) purchases online. That has exactly the SAME value as the person who purchases without a quote.

In other words, our attribution model needs to ignore quotes when they are followed by purchases, but assign value to quotes that are NOT followed by a purchase.

It gets more subtle: a visitor who gets TWO quotes is actually LESS valuable than a visitor who only gets one quote – because a visitor like that is likely to be comparison shopping and so less likely to convert.
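Pulled together, the business logic so far might look something like the following sketch. The structure of the rules comes from the scenarios above, but every number is a placeholder I’ve invented for illustration – the real model used the client’s own figures:

```python
# A sketch of the visitor-value rules described above. Every number is an
# invented placeholder, not the client's actual figure.

PURCHASE_VALUE = 1000.0     # value of a sale (hypothetical)
SINGLE_QUOTE_VALUE = 200.0  # one quote, no purchase: may still convert offline
MULTI_QUOTE_VALUE = 100.0   # two+ quotes: likely comparison shopping, worth less

def visitor_value(purchased: bool, num_quotes: int) -> float:
    # Scenario 3: a purchase is worth the same whether or not quotes
    # preceded it, so quotes followed by a purchase are ignored.
    if purchased:
        return PURCHASE_VALUE
    # Scenario 2: a quote with no purchase still carries value...
    if num_quotes == 1:
        return SINGLE_QUOTE_VALUE
    # ...but multiple quotes suggest comparison shopping, so less value.
    if num_quotes >= 2:
        return MULTI_QUOTE_VALUE
    return 0.0

assert visitor_value(purchased=True, num_quotes=2) == visitor_value(True, 0)
assert visitor_value(False, num_quotes=2) < visitor_value(False, num_quotes=1)
```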

It was a fun intellectual challenge to build all of this complexity into a good attribution model. But when I reflected on the work we’d done, one practical aspect struck me: we kept changing our minds about things.

Perhaps I should put that a different way: we took an iterative approach to the attribution model. We started with some scoping questions: how long are buying cycles? (It turned out: some people buy very quickly, and other people spend a LONG time researching. This was important! A simple “average length of buying cycle” would have been very misleading.) How many quotes do people get before they purchase? (Some people get multiple quotes, and they behave differently from people who get single quotes.)
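As an aside on the buying-cycle question: this is why we looked at the whole distribution rather than a single average. A short sketch with invented cycle lengths shows how a mean can hide a “many fast buyers plus a long research tail” shape:

```python
from statistics import mean, median, quantiles

# Invented buying-cycle lengths (days from first visit to purchase), shaped
# like what we found: many fast buyers plus a long tail of slow researchers.
cycle_days = [0, 0, 1, 1, 2, 3, 45, 60, 90, 120]

print(f"mean:   {mean(cycle_days):.1f} days")   # 32.2 - misleading on its own
print(f"median: {median(cycle_days):.1f} days") # 2.5 - half buy almost at once
print("deciles:", quantiles(cycle_days, n=10))  # exposes the long research tail
```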

Then we moved on to some trial-and-error about the attribution model itself: armed with the answers to our scoping questions, we experimented with different business logic about what value to assign to each individual visitor.

At each step we were looking at the data, running some analysis and learning a bit more about how to build our attribution model. We started with some simple business logic for attribution and we progressively refined it, each time using the data as a guide. The end result is a robust model, fine-tuned to this particular client’s business, that gives a far more accurate picture of campaign cost-benefit than any off-the-shelf attribution model could ever have done.

I didn’t keep count, but at a guess we went through 15 or 20 iterations in this process, each time trying out different cuts of the data or different business logic for attribution. The starting point was mediocre. Each iteration gave us a better model. The end result was awesome.

The moral of the story: it’s good to change your mind.

We were only able to do this project because we had the right tools. One of the compelling advantages of a data warehouse over the standard digital analytics reporting tools is that it supports exactly this type of iterative analysis. If we’d been working with an aggregate reporting system, many of the iterations of our model would have required tag changes. It would have taken years to iterate the model in the way we did – each change to the business logic would have needed a tag change, and each tag change a few months to collect enough data… Not practical. With our data warehouse, we were able to reanalyse historical data at will, so our 15-20 iterations took about a week in total. (They could have gone faster if our brains were faster! But sometimes you need some thinking time between iterations.)
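To make that workflow concrete, here is a rough sketch of what “reanalysing historical data at will” looks like: raw visitor-level events are stored once, and each iteration of the model is just a re-run of new business logic over the same history. The event fields, values and run_model function are all invented for illustration:

```python
from collections import defaultdict

# Raw, visitor-level events as a warehouse might store them (invented sample).
raw_events = [
    {"visitor": "v1", "campaign": "email",   "action": "quote"},
    {"visitor": "v1", "campaign": "search",  "action": "purchase"},
    {"visitor": "v2", "campaign": "display", "action": "quote"},
]

def run_model(events, visitor_value):
    """Group events by visitor, score each visitor with the supplied rule,
    and credit that value to the visitor's last campaign touch."""
    by_visitor = defaultdict(list)
    for e in events:
        by_visitor[e["visitor"]].append(e)
    credit = defaultdict(float)
    for visits in by_visitor.values():
        credit[visits[-1]["campaign"]] += visitor_value(visits)
    return dict(credit)

def iteration_1(visits):
    # Naive rule: only purchases carry value.
    return 100.0 if any(e["action"] == "purchase" for e in visits) else 0.0

def iteration_2(visits):
    # Refined rule: un-converted quotes carry some value too.
    return 100.0 if any(e["action"] == "purchase" for e in visits) else 20.0

print(run_model(raw_events, iteration_1))  # {'search': 100.0, 'display': 0.0}
print(run_model(raw_events, iteration_2))  # {'search': 100.0, 'display': 20.0}
```

Note that iteration 2 changes only the value rule, not the data collection – which is exactly why each iteration took hours or days rather than months.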

So perhaps I should say: it’s good to be able to change your mind. With the right tools, iterative analysis is a breeze, and you can’t beat the feeling of an awesome final result.

iJento at MeasureCamp

A few words on MeasureCamp to wrap up…

I’m really excited about the grass-roots digital analytics community in London, and proud that iJento is sponsoring MeasureCamp. I’ll be at the event all day with some of my iJento colleagues. Please come to our stand and say hello.

Also, Chiin Tan from iJento client FT.com is speaking with me about multichannel analytics – we have some really interesting stories to share, so do come along.

Finally, we’ll be glad to buy you all a drink at the after party (unless we change our minds 😉 ).

Author: John Woods
Position: Founder & CTO, iJento

iJento is a MeasureCamp Gold Sponsor. As John says, come and say hello at MeasureCamp (or at the after party), or find the team at http://ijento.com/.