Ad Testing: Research & Findings
[I’m feeling a bit out of sync in this session at the moment. I’m seated way on the right instead of front and center like I’m used to. It was a strategic decision on my part. With six sessions happening today and only 15 minutes between some, I need to conserve battery power. The power outlets are on the right. I’m smarter than I look, people.]
Jumping right in, we have Andrew Goodman (Page Zero Media), Bill Barnes (Enquiro Search Solutions) and Anton Konikoff (Acronym Media) speaking.
Up first is Anton Konikoff. He’s mesmerizing in a Dracula kind of way. I can’t take my eyes off of him. Anton mentions that Mike Grehan is now part of their crew. I got that memo.
Why Engage In Ad Testing?
- Get quantifiable insight in a controlled environment. The right ad testing methods yield true insights that let you make well-informed judgments.
- Helps you understand messaging effectiveness.
- Identify the most attractive product offerings
- Create the optimal search campaign that maximizes ROI/ROAS
- Identify learnings that can be applied across other media channels.
The Scientific Approach: You isolate your variable. Only test one thing at a time. Keep bids/ad serving equal and run a control group. Doing this helps you get meaningful results, assuming you limit the number of ads being served and collect a large enough sample size.
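Anton’s point about sample size is checkable with basic stats. Here’s a minimal sketch of a two-proportion z-test for comparing the CTR of two ads (not from the session; the click and impression numbers are hypothetical):

```python
import math

def ctr_z_test(clicks_a, imps_a, clicks_b, imps_b):
    """Two-proportion z-test: is the CTR gap between two ads likely real?"""
    p_a = clicks_a / imps_a
    p_b = clicks_b / imps_b
    p_pool = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / imps_a + 1 / imps_b))
    return (p_a - p_b) / se

# Hypothetical test: ad A got 120 clicks on 4,000 impressions, ad B got 90
z = ctr_z_test(120, 4000, 90, 4000)
print(abs(z) > 1.96)  # prints True: significant at roughly 95% confidence
```

If |z| stays under 1.96, the "winning" ad may just be noise – keep collecting data before pausing anything.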
Reality Check: There is the theory of ad testing and then there’s how it works in the real world. If something doesn’t work, you hurt your Google Quality Score, your competition ends up running ahead and you have a limited budget. Not all messages work with all keywords. In theory, you should be testing everything. In reality, we don’t have time for that.
You have to test smarter. With a smart approach under real-world conditions, we can make educated inferences. Start by testing either Titles or Descriptions. Depending on your ad budget, test 2-5 ad versions. Use significantly different messages/offers, organized by theme. If you’re testing in Google, set ad serving to "rotate" (not "optimize") for more even distribution.
Running The Test:
While testing, manage bids normally. Don’t run it as a "test". You’re interested in real-world performance. Once you have enough data, start looking for trends among groups of similar keywords. Make sure you’re focusing on the most important metric – the highest CTR and highest conversion rate may come from different ads.
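That last caveat is easy to see with toy numbers. A quick sketch (all stats hypothetical) where one ad wins on CTR while a different one wins on conversion rate:

```python
# Hypothetical per-ad stats from a finished test
ads = {
    "brand": {"impressions": 10000, "clicks": 400, "conversions": 8},
    "price": {"impressions": 10000, "clicks": 250, "conversions": 15},
}
ctr_winner = max(ads, key=lambda a: ads[a]["clicks"] / ads[a]["impressions"])
cvr_winner = max(ads, key=lambda a: ads[a]["conversions"] / ads[a]["clicks"])
print(ctr_winner, cvr_winner)  # prints: brand price
```

The "brand" ad pulls more clicks, but "price" converts them better – so the right winner depends on which metric actually matters to your business.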
Anton gives us a quick case study from the Four Seasons. They have 72 properties around the world. They decided to test three different types of ads. The first ad focused on the brand, the second on the price point and the third was informational. Their purpose was to get a better understanding of user response across different geographical regions. The same ads and keywords were tested across continental regions.
Findings: Each continent they tested produced unique conversion results for the ad copy. In Europe, the brand theme worked best. In Asia, it was the price points, and in North America, the informational ads.
Round Two: They selected the top performing ad and started testing the next element of the ad. If you tested descriptions first, test titles next. Pause all of your old ads and start fresh to reset the Quality Score.
- High CTR may not be good! Ask if you’re qualifying your traffic enough. Is your ad misleading?
- Are you getting enough clicks to make accurate judgments?
- Do all versions work as well with your landing pages? How do different ads affect Bounce Rates?
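On the "enough clicks" question, you can rough out a minimum sample before the test even starts. A sketch using the common 16 * p * (1 - p) / d^2 rule of thumb for roughly 95% confidence and 80% power (the baseline CTR and target lift here are hypothetical):

```python
import math

def min_impressions_per_ad(baseline_ctr, relative_lift):
    """Rough impressions needed per ad version to detect a relative CTR
    lift at ~95% confidence / ~80% power (16 * p * (1-p) / d^2 rule)."""
    d = baseline_ctr * relative_lift  # absolute CTR difference to detect
    return math.ceil(16 * baseline_ctr * (1 - baseline_ctr) / d ** 2)

# Hypothetical: 2% baseline CTR, want to detect a 20% relative lift
print(min_impressions_per_ad(0.02, 0.20))  # about 19,600 impressions per ad
```

Smaller lifts or more ad versions push that number up fast, which is why Anton caps the test at 2-5 versions.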
By now you’ve got a fairly good picture of what works. That’s no reason to get complacent. Start looking at other components like punctuation, display URLs, proper case vs sentence case, accent usage, dynamic keyword insertion, dayparting, banner & email creative, and engines beyond Google.
His motto is test, test, test, then invest. Heh.
Next up is Bill Barnes.
Search is the connection between Intent and Content. What is the person thinking when they sit down at a computer and launch a search? If you can match their intent with your content, you win.
With Search, intent is key. You want to get inside your customer’s mind. This includes doing research and creating personas. Make your research human, not academic.
Bill shows us how a user interacts with search results pages. There’s a little yellow blob floating around the screen. I’m having Ghostwriter flashbacks. [sings: GhostwritER!] He talks about the Golden Triangle and I zone out for a minute.
How Intent Impacts Searching:
Scenario 1: Bill has a brother-in-law. He doesn’t like him but he has to buy him a birthday present to keep family bliss. He knows he likes John Irving. His brother-in-law lives out of town and Bill plans to spend $20-25 on him. He’s going to have the gift shipped so he doesn’t have to see him. Bill doesn’t like his brother-in-law. He says that about 8 more times.
He looks for a John Irving title. He scans the page until he finds a scent that matches his intent.
Scenario 2: Same purchase but he’s searching for himself, not his brother in law who he doesn’t like. He’s looking for reviews. Again, he scans to match his intent scent.
Scenario 3: Your friends came back from Vegas, now you’re planning a trip and you’re looking for the cheapest rate at the Bellagio. You find the scent that matches your intent – lowest price guaranteed.
Scenario 4: Your friends just got back from the Bellagio and now you’re going to Vegas. Your friends raved about it but you don’t trust them. You do your own search looking for information on Vegas hotels.
Intent should impact scanning behavior.
For Research type queries (the second version of each example), we would tend to "thin slice" sponsored content out of the way. This should mean an increased tendency to skip over top sponsored content.
For Purchase type queries, we would tend to focus more on sponsored content.
They did a test to see how people responded to the Bellagio examples. Both Researchers and Purchasers focused on the same listings. The scanning patterns were almost identical. 80 percent of the people scanned the first few words of the title on 3-4 of the first listings. Both groups paid a lot of attention to the first result.
Interestingly, the results proved that purchasers spent 3x longer on the results page — 23 seconds on average. Researchers spent a greater percentage of time in the sponsored listings than the purchasers did. However, 100 percent of the clicks were in the organic listings.
This means researchers are looking at Sponsored Listings but are not finding what they want. If you know researchers are looking for your site, start tweaking that ad copy to target them. Don’t try and sell, inform.
An experiment with personalized results:
They asked search marketers what search was going to do and where it was going in 2010. They talked a lot about Universal Search and personalization. To test whether personalized search would work in 2010, they put together a study. They took a group of subjects and followed them around, watching the kind of search activity they performed – what sites did they go to, did they go to social sites, did they watch video, etc. After they got their results, they mocked up a page with personalized elements. They took positions 3-5 and planted personalized results in there. The personalized results drew the eye a lot more than they were expecting. He thinks this is proof that personalized search will work. The percentage of fixation time doubled on personalized results and they got 3x the amount of clicks.
You don’t have to wait for the results to be personalized. When you have your personas built, you can write copy that will specifically appeal to them. Really drill into your personas.
Semantic Mapping In Search
He told people to search for "digital camera". They asked people what they were thinking the moment before they searched. Got lots of brand names, some people were looking for reviews, features etc. Even though they’re thinking about different stuff, they still all typed in the same query. They noticed how they moved down the page and which words they fixated on. They found that most people focused on the semantically mapped terms – the stuff that was dancing around in their head. Know what your customers are thinking about and put those words in your ad copy.
Brand fixations occurred in the URL and title of the listing; not in the description. Place your brand in the title, URL and as close to the start of the description as possible in your sponsored and organic listing. Subjects with established affinity for the brand spent 25 percent less time on the Top Sponsored listings, jumping down to the organic listings 73 percent faster than the non-affinity group. Sponsored listings appear to have a greater opportunity to lift brand affinity among new customers. Write content to them, not existing customers.
- Understand intent
- Get inside your customer’s mind
- Understand the importance of the area of greatest promise and the consideration set
- Determine whether you’re targeting a researcher or a purchaser
- Test personalized ad copy
- Don’t base understanding on just queries
- Test brand messaging
Last up is Andrew Goodman. He says he’s going to ask us a lot of questions. [pleasedon’tlookatmepleasedon’tlookatme]
Q: Should you test ads towards ROI or CTR?
Q: Headline: Dynamic Keyword Insertion or not?
A: It works very differently with short and long keyword lists. Obviously you should test. It’s often great for CTR, not so great for ROI. It can be the best option to begin with – until you refine and find something superior for long-term ROI.
Q: Headline: Clever or Plain?
Q: Call to Action in Ad: Yes or No?
A: You should always test multiple offers and calls to action – especially at the refinement stage. On its own, who knows? But with a lot of testing, the "ultimate" ad will often include one. Your brand can work as a call-to-action, letting you keep it in the display URL. Or reinforce it by putting it in both the headline and the display URL without having to use up body characters.
Q: Use Punctuation: Yes or No?
A: Totally typical of something that is context-sensitive and requires testing. B2B buyers might like it. Conversely, B2B buyers might be equally amenable to retail psychology or eye tricks. Users’ eyes pick up on subtle things. All of a sudden, a "buy now!" call to action seems too salesy to some. You need to reintroduce new tests periodically.
Q: Display URL: Keyword in Subdirectory or Not?
A: Tends to win. They’re eye-grabbing. Additional relevancy cue for users. Seems "navigational".
Q: The display URL looks like a destination URL or no?
A: He’s seen it work. Why? User confidence? User persona: Slightly gullible, likes "real" search, hates "ads" but in reality they’re not really that picky.
Conclusion: Don’t listen to his opinion. Data can be complex. He doesn’t know your business model and not all parts of the account behave like other parts.
Stage 1: Rapid Discovery
Look for Hot buttons: Be motivated by "skinny persona research". Ask who. Consider what drives them. Try certain incentives, offers and calls to action. Price, shipping, style, a persona, or business crisis.
On very granular campaigns, consider what you can or can’t extrapolate to other ad groups: headlines, styles, calls to action, benefits, shipping offers, testimonials, etc.
Stage 1A: Beware of Statistical Noise and Context Sensitive Tests
Stage 1B: Statistical relevance of tests by ad position
Stage 2: Multivariate Testing
After you’ve made your initial quick improvements, you have to move into formal multivariate testing.
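A full-factorial multivariate test simply crosses every variant of each ad element. A sketch with hypothetical ad elements (none of these are from the session):

```python
from itertools import product

# Hypothetical element variants to cross in a full-factorial test
headlines = ["Luxury Hotels Worldwide", "Hotels From $199/Night"]
descriptions = ["Book direct and save.", "Lowest price guaranteed."]
display_urls = ["example.com/Hotels", "example.com"]

ad_variants = [
    {"headline": h, "description": d, "display_url": u}
    for h, d, u in product(headlines, descriptions, display_urls)
]
print(len(ad_variants))  # prints 8: 2 x 2 x 2 combinations
```

Combination counts explode fast as you add elements, which is why the quick wins of Stage 1 should narrow the field before you get here.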
Tight Targeting Bias in a Quality Score World
Best practices account-wide will "half write the ads for you".
Keywords, Ads, Landing Pages connection: Poor relevancy, loose targeting will make testing beside the point.
High CTR Bias in Paid Search
Going granular is part of the battle. But what about broader parts of the account? You don’t always know the intent. Valuable prospects with different intent can be typing the same terms.