Issue 17 - Creative Testing Showdown

On structuring ad accounts and unmasking Google

Few things are more frustrating than being a DTC media buyer. 

Marketing platforms change and evolve constantly. From chasing “what’s working right now” to navigating an ever-morphing user interface, the experience is one of constant learning and re-learning. The “decision engine” behind every major ad network is cloaked in secrecy and forever tweaked behind the curtain. Your ad account can crush one week and tank the next, without any input from you at all. 

In this edition we take a look at the two big whales - Meta and Google. We summarize the ongoing debate on how to structure your Meta ad account, look at how Google is trying to stay relevant, and discuss the implications of the recent Google leak.

Let’s get into it…

To Test Or Not To Test

Andrew Faris has a unique approach to creative testing. 

He throws new creative into his evergreen campaigns using a cost-capped CBO structure and wishes them well. 

In other words, he pits them against his best ads and lets the algorithm decide which ones get spend. 

  • Evergreens: consistently top-performing ads.

  • Campaign Budget Optimization (CBO): a hands-off method that lets Meta’s algorithm allocate a single campaign budget across ad sets, favoring top performers. 

  • Cost caps: limit your average cost per result, so Meta bids more conservatively. Your ad will appear less frequently, in less competitive slots or times, and possibly to less engaged audiences. 

Here’s why he does this. 

  1. He dislikes creative testing campaigns. 

First, because “most new ads don’t work,” he says. A testing campaign’s performance will be below average, and that’s money lost. 

Next, because they run on too small a sample size to draw any conclusions from. 

Conclusion? They are a waste of money.

  2. The algorithm is smarter than you. 

Let it choose the winners.

  3. Letting the algorithm lower spend on losers allows you to test much more creative at a lesser cost. 

Jess from FireTeam strongly disagrees and responded with a detailed thread. Here are his major points:

  1. You should NOT force direct competition between ads, especially with cost caps holding them all to the same efficiency bar.

Why? 

Because top-funnel audiences are harder to convert than bottom-funnel audiences. Putting everything under the same cap results in spend going to bottom-funnel ads only. 

“Trying to pit higher funnel creative against lower funnel creative is insane and unhealthy.”

Jess, FireTeam

Jess says you WANT to let a top funnel ad find its audience (by letting it bid more). 

Sure, it will be less profitable, but it will absolutely add to your contribution margin and bottom line.

  2. Creative testing campaigns do not perform THAT poorly. 

Or at least, they shouldn’t. If yours do, find a new creative partner who is better at understanding things like offer, angle, hook, and development processes.

However, even if they do perform 10-12% worse, he considers the learnings worth it. 

  3. What are the learnings? 

Simply which ads perform when given a chance. 

When you feed this data into an iteration cycle - posting ads, gathering data, iterating on performers - you get a robust output of strong performers, plus insights into what resonates with your various personas. 

  4. When you give all ads a chance, you get to find the singles, the doubles, and the home runs. 

If you run Andrew’s structure, all you get are the home-run ads.

Enough singles can still get you home, though, says Jess. 

  5. Beware the algorithm. 

It is better than we’ll ever be… but only within a 7-day attribution window. 

Beyond that, it is blind to the broader picture of your strategy and the market’s fluctuations. 

Jess’ point: if you want to grow as a brand, you should be thinking and testing out ideas to evolve your messaging and find new audiences. And the algorithm can’t do it for you. 

So, whose side are you on?

Regardless of which tactic you choose to go with, Taylor Holiday from the Common Thread Collective says there’s one underlying key to success in Meta ads: pumping out more creatives. 

Takeaway: Andrew’s cost-capped CBO structure allows you to identify home-run ads, fast. You won’t need a crazy iteration cycle to stay competitive, and anything below great will get little to no spend. 

Alternatively, creative testing will help you find ok, good, and great ads. If 90% of your ads are ok and above, you should come out ahead. If you’ve got a crack team with tight iteration cycles, even better. 
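To put Jess’ 10-12% figure in perspective, here is some back-of-envelope arithmetic. All numbers below (the evergreen ROAS, the share of budget devoted to testing) are hypothetical, not from either thread; the point is just that a small testing budget at a modest performance penalty drags blended account performance down very little.

```python
# Back-of-envelope: how much does a dedicated creative-testing
# campaign drag down blended account performance?
# All inputs are hypothetical illustration values.

def blended_roas(evergreen_roas, test_share, test_penalty):
    """Blend evergreen ROAS with a testing campaign that performs
    `test_penalty` (e.g. 0.12 = 12%) worse, on `test_share` of budget."""
    test_roas = evergreen_roas * (1 - test_penalty)
    return (1 - test_share) * evergreen_roas + test_share * test_roas

evergreen = 2.5  # hypothetical evergreen ROAS
with_testing = blended_roas(evergreen, test_share=0.10, test_penalty=0.12)
drag = (evergreen - with_testing) / evergreen

print(f"blended ROAS: {with_testing:.3f}")  # 2.470
print(f"account-level drag: {drag:.1%}")    # 1.2%
```

Under these made-up numbers, a 12% penalty on 10% of spend costs only about 1.2% at the account level - which is the kind of trade Jess argues is worth making for the learnings.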

Did Google just replace branded search? 

Google’s Marketing Live Keynote revealed a number of new ad innovations, rife with AI. 

These include:

  • Conversational Google Ads Interface: Streamlines campaign setup through a new, easy-to-use conversational UI.

  • Automatically Created Assets: Uses AI to generate ads automatically, pulling in relevant keywords, headlines, and images from your content.

  • Performance Max AI Integration: Enhances campaigns with AI-generated custom assets to boost conversion rates significantly.

  • Search Generative Experience (SGE) Ad Formats: Introduces dynamic AI-driven ad placements within Google's search environments, adapting to user queries.

  • Product Studio for Custom Images: Allows for easy creation of custom product imagery using AI, including scene generation and image enhancement features.

But the most interesting update might be the least technologically advanced. 

Introduciiiiiiiiing… Brand profiles. 

It's a large, top-of-page space dedicated to your brand if its name is in the search.

- Search Engine Land

Why it matters

According to Google, over 40% of shopping queries on search mention a brand name. 

Branded search is prime real estate. 

However, bidding for your own branded search terms is considered a less-than-optimal strategy since this traffic is likely to reach you organically. 

Taylor Holiday and others like Peter Quadrel from Odylic Media have noticed that Google’s PMAX algorithm invests more than it should into branded search (PMAX is Google’s hands-off option that automatically optimizes ad placements and bidding across all Google platforms).  

Not ideal. 

On paper, brand profiles make branded search investments obsolete - so PMAX should stop pouring money into them.

However…

A few things to point out here. 

First, brand profiles constitute a wild left turn from Google’s prior philosophy, which allowed competing brands to dominate search result spots for your brand’s name - a point Lucian Armasu from AdsLux has also made. 

Second, it isn’t clear where the existing branded search ads will go. Will they simply be pushed below the profile? Either way, they just became far less important. 

Oh, and what are the chances they make the brand profile section competitive? 

We’ll have to wait and see. 

Takeaway: brand profiles look like the new, top-of-page mainstay feature. Very little was shared about what this means for branded search ads, but it suggests they have been rendered quasi-obsolete. If true, it might fix PMAX’s habit of over-investing in branded search. 

The Google Leak

Rand Fishkin is the co-founder of sparktoro.com and snackbarstudio.com, and an SEO expert who has been battling Google for years. 

Battling over what? 

Over what Google’s algorithm tracks, and how it decides what ranks.

While at Moz, which Fishkin co-founded, he developed the concept of domain authority, a metric that predicts a website's ability to rank on search engine results pages (SERPs). Google denied using any such metric, but domain authority became widely adopted by the SEO community. 

Next, Fishkin conducted various experiments, the results of which suggested that Google uses click-through rates (CTR) and user engagement as ranking factors. Once again, Google denied the “allegation”. 

Lastly, Fishkin suspected the existence of a Google “sandbox” effect where new websites are temporarily prevented from ranking well in search results. Guess what Google said?

Over the years, Google representatives repeatedly denied Fishkin’s theories, even calling his takes “made-up crap”.

On May 5th, Fishkin received an email from a person claiming to have access to a massive leak of API documentation from Google’s Search division. Its authenticity was verified with the help of ex-Google employees. 

Turns out… Fishkin was right about the CTR and the sandbox. And while Google doesn’t use Moz’s exact domain authority metric, it has a similar one. 

Mike King, a renowned technical SEO and the founder of iPullRank, picked the documentation apart and broke it down in an in-depth blog post, if you want a closer look. 

Briefly:

  1. There is no detail about Google’s scoring functions in the documentation.

  2. There is a wealth of information about data and features manipulated and stored by Google to run its search engine. 

Takeaway: Google has apparently been less than truthful for years. And while we can empathize with them protecting proprietary information, their efforts to discredit people like Fishkin are condemnable. 

Both this leak and information revealed during Google’s DOJ trial have shown that Google consistently attempts to veil the truth, which makes it hard to trust its future statements. 
