The Anvil

A place to hammer out ideas

Tips for reducing the width of the predictive interval

October 7, 2021

In my last post I talked about how you can use a forecast to measure the impact of some kind of change (e.g. Google rewriting your title tags in the SERP). One of the main difficulties you might run into is that the predictive interval is too wide to conclude that a change has had a positive/negative effect with XX% probability.

In this case we expect there to be a positive difference overall, but we cannot be 95% certain that there has been a positive change

What do you do if your best guess is that the test has a positive outcome but you aren’t 95% sure? Will Critchlow has written about this kind of issue, which they sometimes see at Search Pilot. Will has a set of recommendations for this situation, based around the mantra “we’re doing business, not science”, which is an important rule to remember (unless you are a scientist!).


Read more

Measuring the impact of Google's Title Rewrite Change

September 29, 2021

Background

On 24th August 2021 Google announced a change in how page titles would be displayed in search results. The new system means that Google’s machine learning systems will more often display a custom page title in the links on a search engine results page; previously there was less rewriting and the contents of the <title> tag were more likely to be used.

Barry Schwartz has written a good summary of the ins and outs of this over at Search Engine Land.

As with any algorithm change, Google are saying that this improves results overall but there are some people for whom the new titles seem to be performing significantly worse.

If something is a lot worse or better than it was previously then you don’t need fancy statistics to be able to see that.

If whatever you are looking at has the same impact as the Moderna covid vaccine then plotting the right chart should be more than enough to convince anyone

Read more

How good will my forecast be?

June 28, 2021

One of the challenging things about using a machine learning system like Forecast Forge is learning how much you can trust the results. Obviously the system isn’t infallible, so it is important to have an idea of where it can go wrong, and how badly, when you are talking through a forecast with your boss or clients.

One way that people familiarise themselves with Forecast Forge is to run a few backtests. A backtest is where you make a forecast for something that has already happened and then compare the forecast against the actual values.

For example you might make a forecast for the first six months of 2021 using data from 2020 and earlier. Then you can compare what the forecast said would happen against the real data.

[Diagram: use data from 2016 up to the cutoff to predict a known holdout period, then use the same methodology for the unknown future beyond 2020]
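Forecast Forge itself runs inside Google Sheets, but the mechanics of a backtest are easy to sketch in a few lines of code. The snippet below is a minimal illustration rather than the Forecast Forge methodology: it holds out the last six months of a daily series, makes a naive seasonal forecast from the earlier data (repeating the last year of training values), and scores it against what actually happened. The file name, column names and the naive forecasting rule are all assumptions for the example.

  import pandas as pd
  import numpy as np

  # Assumed input: a daily series with "date" and "sessions" columns (hypothetical names)
  df = pd.read_csv("daily_sessions.csv", parse_dates=["date"]).set_index("date")

  # Hold out the last six months as the "future" we pretend not to know about
  cutoff = df.index.max() - pd.DateOffset(months=6)
  train = df.loc[df.index <= cutoff, "sessions"]
  actual = df.loc[df.index > cutoff, "sessions"]

  # Naive seasonal forecast: repeat the final year of training data into the holdout.
  # This stands in for whatever forecasting method you actually use.
  last_year = train.iloc[-365:].to_numpy()
  forecast = pd.Series(np.resize(last_year, len(actual)), index=actual.index)

  # Keep score: compare the forecast against what really happened
  mape = (np.abs(actual - forecast) / actual).mean() * 100
  print(f"Backtest MAPE over the holdout period: {mape:.1f}%")

Swapping in different forecasting methods, transforms or regressors and re-running the same comparison shows you which setup would have been most accurate.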

Sometimes when you do this you will see the forecast do something dumb and you will think “stupid forecast, that obviously can’t be right because we launched the new range then”, or something like that; the reason will probably be very specific to your industry or website. If you can convert your reasoning here into a regressor column then you can use it to make the forecast better.
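As a rough illustration of what a regressor column can look like, suppose you believe the jump in your backtest is explained by the new range launching on a known date. You can encode that belief as an extra 0/1 column next to the metric you are forecasting; the launch date and column names below are made up for the example.

  import pandas as pd

  # Daily history of the metric we want to forecast (hypothetical dates)
  history = pd.DataFrame({"date": pd.date_range("2020-01-01", "2020-12-31", freq="D")})

  # Regressor column: 0 before the (assumed) launch of the new range, 1 afterwards.
  # A forecasting model can then learn how much the launch shifted the metric
  # instead of treating the jump as noise or an unexplained trend change.
  launch_date = pd.Timestamp("2020-06-15")  # assumed launch date for illustration
  history["new_range_live"] = (history["date"] >= launch_date).astype(int)

  print(history.loc[164:168])  # rows either side of the launch

In Forecast Forge this would just be another column in your sheet, extended into the future so the model knows the new range is still live during the forecast period.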

Backtesting several different forecasting methods with different transforms and regressors

Read more

Follow up thoughts after #PPCchat

April 15, 2021

On Tuesday I was kindly invited to discuss PPC Forecasting as part of the regular #PPCchat Twitter chat. There is a recap up on the official website.

#PPCchat started just over 10 years ago and, whilst I don’t think I was involved in the very first one, I joined the conversation as soon as it was moved to a more convenient time for the UK timezone. It was a very important part of my week for a number of years, but I’ve drifted away from the community as my work moved away from hands-on PPC management and more towards digital analytics and data science. I was very flattered to be invited to be a guest on the chat and very happy to be able to help a community that has helped me a lot.

The twitter chat format felt a bit frantic so I thought I’d expand on some of the ideas and questions raised here.

The Golden Rules

Before getting into anything too complicated, here are what I think are the two most important things you can do to get better at forecasting. They have nothing to do with machine learning or fancy techniques or anything like that; you should be able to apply them regardless of your current process.

  1. Actually care about being right
  2. Keep score; check how good your forecasts are against what actually happened and then try to figure out where you went wrong

If you do only these two things then your forecasting will start to improve. And you don’t even need to pay for a Forecast Forge subscription to do them!
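Keeping score doesn’t need anything fancy either; a spreadsheet works fine. As a minimal sketch (the column names and numbers are made up), log each forecast when you make it, fill in the actual once the period is over, and then look at both the average size of your errors and whether they lean in one direction:

  import pandas as pd

  # Hypothetical forecast log: one row per forecast, filled in with the actual later
  log = pd.DataFrame({
      "period":   ["2021-01", "2021-02", "2021-03"],
      "forecast": [12000, 13500, 15000],
      "actual":   [11400, 14100, 13800],
  })

  log["error_pct"] = (log["forecast"] - log["actual"]) / log["actual"] * 100

  # How far off you are on average, and whether you systematically over- or under-forecast
  print("Mean absolute % error:", log["error_pct"].abs().mean().round(1))
  print("Mean % error (bias):  ", log["error_pct"].mean().round(1))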

Different types of forecast

To help clarify things and avoid talking at cross-purposes (always a risk on Twitter) I split forecasting up into three overlapping areas:

  1. Forecasting something you’ve never done before (e.g. “we’ve never advertised on LinkedIn; how much will we get from that?”). This is hard and really relies on a lot of hands-on experience and marketing expertise to work well. Machine Learning of the type that Forecast Forge does isn’t very helpful here - an ML approach that might work is taking data from a lot of people who have done the thing and then trying to figure it out from there.
  2. Forecasting doing more (or less) of something you’ve done a bit of in the past. The big one here is “what if we increased/decreased the budget?” but it might also be things like turning on retargeting, the site going into sale or above the line ad campaigns (e.g. TV). Forecast Forge can help quite a lot with this kind of thing.
  3. Estimating trends and changes in everything else - e.g. what is the seasonality for CPC and conversion rate? Is overall search volume in this niche going up or down? Forecast Forge can help here too.

Type one forecasts, where you forecast the impact of something new, are a very interesting challenge; the way to approach them from a Machine Learning angle is to collect data from other people who have done the new thing and then try to figure out which of those other people your client is most similar to. This is basically what people do, too, when they draw on their experience to make an estimate. Forecast Forge doesn’t store your data, know anything about the sector your business is in, or know exactly what metrics you are forecasting, so this is not something it supports (there are a few ways you could “hack” this - ask me if you want to test them).

Forecasting doing more or less of something that you’ve already been doing is really important for paid media; this is how you estimate the impact of increasing or decreasing your budgets which is a super-important and frequently demanded forecast for everyone.
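To make that a bit more concrete, here is a hedged sketch of the data layout for a budget scenario: historical spend goes in as a regressor alongside the metric, and the future rows of the spend column hold the budget you want to evaluate, with the metric left blank for the forecast to fill in. The numbers and column names are invented for the example.

  import pandas as pd

  # Past rows: what we actually spent and what we actually got (flat numbers for brevity)
  past = pd.DataFrame({
      "date": pd.date_range("2021-01-01", periods=90, freq="D"),
      "spend": 500.0,
      "conversions": 40.0,
  })

  # Future rows: the budget scenario we want to evaluate (+50% spend),
  # with conversions left empty for the forecast to fill in
  future = pd.DataFrame({
      "date": pd.date_range("2021-04-01", periods=30, freq="D"),
      "spend": 750.0,
      "conversions": None,
  })

  scenario = pd.concat([past, future], ignore_index=True)
  print(scenario.tail(35))

Re-running the forecast with different numbers in the future spend column gives you a comparable estimate for each budget level.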


Read more

How often should you forecast?

March 22, 2021

How often should you be updating forecasts or making new ones? The answer depends on what you mean by “forecast” and can range from “as often as possible” through to something much less frequent.

When you use machine learning to make a forecast there are three parts to it:

  1. The model
  2. The parameters
  3. The data

Making changes to any of these could be called “forecasting”.

The three categories are a little bit fuzzy. For example, it isn’t totally clear where the boundary is between parameters and model, but the basic idea is that you can make very frequent updates to things near the bottom of the list and should be a bit more cautious with things at the top of the list.

This might be easier to understand with an example:

An easy way to make a forecast that includes weekly seasonality is to make the forecast for the next day a weighted sum of the previous seven days. By giving more weight to what happened seven days ago you will see a weekly pattern in the forecast.

The prediction for each day is the weighted sum of the previous seven days
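Here is a minimal sketch of that idea; the weights are made up for illustration rather than fitted to anything, with most of the weight on the value from seven days ago and the rest spread over the more recent days.

  import numpy as np

  # Weights for the previous 7 days, oldest (7 days ago) first; they sum to 1.
  # Putting most weight on the value from 7 days ago makes the weekly pattern repeat.
  weights = np.array([0.60, 0.05, 0.05, 0.05, 0.05, 0.05, 0.15])

  def forecast(history, horizon):
      """Forecast `horizon` future days, each as a weighted sum of the previous 7 days."""
      values = list(history)
      for _ in range(horizon):
          values.append(float(np.dot(weights, values[-7:])))
      return values[len(history):]

  # Two weeks of history with a clear weekend dip (hypothetical numbers)
  history = [100, 105, 102, 98, 110, 60, 55,
             101, 107, 103, 99, 112, 62, 57]
  print(forecast(history, horizon=7))

Because each new prediction is fed back in as an input for the next day, updating the data (part 3 of the list above) is all you need to do to refresh this forecast; the weights (the parameters) can stay the same until there is a good reason to change them.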

Read more
