We love a good test

At fed we love a good test.

Here are five insights that could help your direct response testing.

1. Ensure it’s worth testing 

Focus on tests you think will have an impact and avoid those that don’t have a credible hypothesis.

Don’t waste your time on tests you know won’t be worth the effort – and, more importantly, won’t give you the clear insights you want.

Here are some tests that have proven their worth for us:

  • Ask levels (for example, do I ask for highest gift multiplied by 1 or 1.25)
  • Inclusion of premiums in your mail pack (how much is it worth spending, do I get a net return)
  • Some creative tests (for example, whether or not to include certain types of lifts – we’ve seen some lifts with graphic imagery suppress response)
  • Single versus multiple asks (do I just ask for cash, or regular giving too)
  • Outer envelopes (e.g. plain, colored, branded)

The above are just a few of the test ideas we would focus on – the ones where we’ve seen the most significant results, in terms of clear winners, for our clients.

What tests are not worth it? 

Things that logically don’t make much sense.

Think about it. Would someone really be compelled to give, or give more, just because the CEO’s signature is in blue pen as opposed to black? Messing around with fiddly little tests (that in the big scheme of things won’t tell you much) is probably not worth your precious time.

Also consider the effort required to conduct the test. An ask test takes very little effort, for example, but creative tests require a chunk of work, including extra costs for developing additional lifts, which often aren’t accounted for.

2. When should you test?

For most of the direct mail campaigns we manage, we test – but only if we expect the test to give us learnings we can apply in the future.

There have been instances where we don’t test, usually because the campaign we are managing is already full to the brim with tactical elements, and including a test might tip it over the edge.

Testing is extremely worthwhile when you are in the infancy of a direct mail or fundraising program, as it will shape the way you run future campaigns. It’s also useful if you’re attempting to convince your boss, your board and sometimes even your donors that what might seem counter-intuitive to them is making you more money (yes, those longer letters do work).

3. What if my program is too small to conduct controlled tests?

Never fear. We know that not everyone has a direct mail file big enough to conduct testing (‘big enough’ depends on the type of test, but typically a test cell needs around 5,000 donors – there’s a rough sample-size sketch after the list below), so if you fall into that category, here’s what we would do:

  • Learn from what others have done 
    • Ask fundraising friends who have done the testing in the past (and whom you trust) – what they have tested will most likely work for you too (no, your donors really aren’t any different just because you live on the west coast, or because you help animals and not kids).
  • Soak up testing results from fundraising blogs
    • There’s lots of good fundraising wisdom out there, and if someone is prepared to put their name to it, then it’s likely believable. We also share testing insights in the fedX series, so be sure to keep an eye out for these in the future.
  • However still question the results 
    • Knowing when to question versus when to accept is key. If you attend a conference and someone bangs on about how a tick box on the response mechanism increased the number of bequest prospects, ask yourself: was it tested? Is there a better way to gain new bequest prospects or confirmed bequestors without sacrificing cash if you don’t need to? Talk to the source, or chat to others that have a successful bequest campaign and find out how they are doing it.
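
As a sanity check on that 5,000-per-cell rule of thumb mentioned above, here is a rough sketch (in Python) using the standard two-proportion sample-size formula. The baseline response rate, the uplift worth detecting, and the 80% power target are all illustrative assumptions – plug in your own numbers.

```python
# Rough sample-size check for a response-rate test (two-proportion formula).
# The 2% baseline and 3% target response rates below are illustrative only.
from scipy.stats import norm

p1 = 0.02             # assumed control response rate (2%)
p2 = 0.03             # smallest test response rate worth detecting (3%)
alpha, power = 0.05, 0.80

z_a = norm.ppf(1 - alpha / 2)   # ~1.96 for a two-sided 95% test
z_b = norm.ppf(power)           # ~0.84 for 80% power
p_bar = (p1 + p2) / 2

n_per_cell = ((z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
               + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
              / (p2 - p1) ** 2)

print(f"Donors needed per test cell: {n_per_cell:.0f}")
```

With those assumptions you land in the low thousands per cell; detecting a smaller uplift pushes the number up quickly, which is why small files struggle to get clean reads.
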
4. Apply logic

 

\"\"

It’s all well and good to conduct a great test, see results showing that a higher ask amount increased response by 10%, then go on with your day and use that ask strategy in future campaigns. But really? Why would asking for more increase response? Is that what the test was intended to show?

Always question the numbers. Apply logic and common sense. Here’s what we do to ensure we get the right insights:

Check the numbers

  • Was the data split 50/50 or 42/58? Did some late suppressions pollute the splits, or was the data simply not evenly split? This could be the reason the results are so different.

When splitting data for a test, it is important to split randomly and evenly across the variables that influence the outcome. For warm appeal mailings, that means recency, frequency and value. For acquisition mailings, it means the data source/list and other variables such as state and gender.
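
To make that concrete, here is a minimal sketch of a stratified random split in Python. It assumes a pandas DataFrame of donors with hypothetical 'recency_band', 'frequency_band' and 'value_band' columns already assigned – adapt the column names to your own file.

```python
# Minimal sketch: allocate donors to test/control evenly within each RFM stratum.
import numpy as np
import pandas as pd

def split_test_control(donors: pd.DataFrame, test_share: float = 0.5,
                       seed: int = 42) -> pd.Series:
    """Return a 'test'/'control' label for every donor, balanced within
    each recency/frequency/value stratum."""
    rng = np.random.default_rng(seed)
    labels = pd.Series("control", index=donors.index)
    strata = ["recency_band", "frequency_band", "value_band"]
    for _, group in donors.groupby(strata):
        idx = group.index.to_numpy()
        rng.shuffle(idx)                       # random order within the stratum
        n_test = int(round(len(idx) * test_share))
        labels.loc[idx[:n_test]] = "test"      # first share of each stratum -> test
    return labels

# Usage: donors["cell"] = split_test_control(donors)
```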

Check the splits 

  • Did you lump all your ‘best’ donors into the test group and the rest into the control group? Ensure your donors are split evenly across all segments.
  • Also, there should be a good reason for running a test that doesn’t have a 50/50 split. One is a high-risk test where you want to minimise exposure; another is where you have tested previously and it seemed to work, but you want to back-test to make sure.
5. What does this all mean?

\"\"

What happens when your results are not statistically significant?

Sigh.

You might be happy to know that many of our tests are not statistically significant. Don’t let it get you down though. You can still learn a lot from the test. This is where you use your intuition (does it make sense?) and chat with colleagues to see whether they have had the same results.

For all tests we aim to deliver a result that is statistically significant at the 95% level. Here is a handy statistical significance calculator you can use: https://www.evanmiller.org/ab-testing/ (note you will need to upload the raw data, as it uses the distribution of the gifts to make the calculation).
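
If you’d rather run a quick check yourself before reaching for the calculator, here is a minimal sketch of a significance test on response rates (a 2x2 chi-squared test in Python). It only covers response rate – the gift-size side needs the raw gift data, as noted above – and the counts are illustrative.

```python
# Quick significance check on response rates only; the counts are made up.
from scipy.stats import chi2_contingency

test_mailed, test_gifts = 5000, 240        # hypothetical test cell
control_mailed, control_gifts = 5000, 195  # hypothetical control cell

table = [
    [test_gifts, test_mailed - test_gifts],
    [control_gifts, control_mailed - control_gifts],
]
chi2, p_value, dof, expected = chi2_contingency(table)

print(f"Test response rate:    {test_gifts / test_mailed:.2%}")
print(f"Control response rate: {control_gifts / control_mailed:.2%}")
verdict = "significant at 95%" if p_value < 0.05 else "not significant at 95%"
print(f"p-value: {p_value:.3f} -> {verdict}")
```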

What happens if the response rate and average gift are vastly different?

This mostly happens in ask strategy testing – therefore we need to look at the net income per donor and see if there are any significant differences there.
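
As a worked example (with made-up numbers), here is how that net-income-per-donor comparison might look for two ask strategies – a lower response rate can still win once the higher average gift is factored in.

```python
# Illustrative comparison of two ask strategies on net income per donor mailed.
# Response rates, average gifts and pack cost are all made-up numbers.
cost_per_pack = 2.10  # assumed print/post cost per donor mailed

cells = {
    "ask x1.00": {"response": 0.050, "avg_gift": 62.00},
    "ask x1.25": {"response": 0.044, "avg_gift": 74.00},
}

for name, c in cells.items():
    gross_per_donor = c["response"] * c["avg_gift"]
    net_per_donor = gross_per_donor - cost_per_pack
    print(f"{name}: gross ${gross_per_donor:.2f}, net ${net_per_donor:.2f} per donor mailed")
```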

What happens if the test didn’t make a difference?

  • Sometimes finding out the test didn’t harm the campaign or make a difference is as good as improving the results.
  • You can always dig a little deeper. Overall it may not have made a difference, but did it make a difference to your active cash group or your high-value donors? Break the results down to see if there are hidden gems below the surface, or review the long-term impact of each group in future campaigns.

You can also test more than one thing at a time, and even run long-term tests (over multiple campaigns) to see if the results stack up over time.

Don’t just take it from me though.

Take it from my colleague Andy Tidy. If you missed his post about Simpson’s Paradox and the importance of questioning test results – here it is for you to enjoy.

Happy testing!
