Subsidy Costs in PPC Marketing
Drilling Down Newsletter #64: 2/2006
Drilling Down - Turning Customer Data into Profits with a Spreadsheet
*************************
Customer Valuation, Retention, Loyalty, Defection
Get the Drilling Down Book!
http://www.booklocker.com/jimnovo
Prior Newsletters:
http://www.jimnovo.com/newsletters.htm
========================
In This Issue:
# Topics Overview
# Best Customer Marketing Articles
# Subsidy Costs in Pay-per-Click Marketing
--------------------
Topics Overview
Hi again folks, Jim Novo here.
Are you doing any PPC advertising for search phrases where you also have a high Organic ranking in the search engine for the same phrase? If so, you might be able to save yourself a ton of money using the simple test described in this newsletter.
We've also got a couple of great customer marketing links: one on the art of customer segmentation and one on the analytically-driven enterprise.
Let's get at that Drillin'!
Best Customer Marketing Articles
====================
*** Build Profit With Customer Segmentation
February 12, 2006 DM News
Arthur Middleton Hughes, a legend in the DM business, serves up an extremely complete description of all that should go into a proper customer segmentation plan. Need some help with defining those segments? To get the most actionable results, do not start with product and/or demographic characteristics; start with segmenting by behavior.
*** Competing on Analytics
February 20, 2006 Optimize Magazine
Now you're talking. This article blends some of the thinking in my "Six Sigma Everything" presentation to the 2005 eMetrics Summit and my article "Creating Analytical Cultures", but from a different perspective. To the traditional list of business differentiators - product innovation, customer intimacy, and operational excellence - can we add "analytics", meaning superior knowledge about customers and processes? Some in the web analytics community seem to disagree with the core idea of "enterprise analytics". Well, I've seen this movie before, and I can tell the difference between "reporting" and "analysis". The first is non-rigorous and built solidly around the principle of CYA. The second yields real intelligence on a macro scale.
Questions from Fellow Drillers
=====================
Subsidy Costs in Pay-Per-Click Marketing
Q: Hey, Jim-
I'd like to know HOW the effectiveness of PPC ads can be tested if your site is already #1 or #2 in the organic search rankings.
Let's say I discover that if I spend nothing on PPC, people still see my site in all their search results - yes, below the three sponsored listings, but high above the fold nonetheless.
By opting to go organic-only, sure, you're guaranteed to lose a few clicks.
BUT, if my PPC + Organic -> Organic-Only click drop-off is only 5% or so, can I really justify a $500,000 PPC campaign for my client if I know that number of lost clicks is worth only $25,000?
I guess what I'm saying is that the only results from a PPC campaign that matter are the incremental clicks paid search provides above and beyond organic search.
And if those incremental results aren't significantly higher, it's a lot harder for me to justify the spend... 'ego spending' of course aside.
A: Yes, ego spending and the "best practice" from offline media of "allocating credit" for a sale to a certain media despite a lack of proof the media should get any credit at all...
The question you should be asking, if you are optimizing profits as opposed to sales or "exposure", is this: where is true breakeven on the PPC? That of course depends on the margin of the business and the cost of the click, but there are several other dynamics in play, including the following (there's a quick sketch of the breakeven math right after this list):
1. The tendency of a 1st-ranked PPC listing to deliver sub-optimal ROI due to "sport-clicking" by casual / newbie "surfers"
2. The tendency of lower-ranking PPC to deliver higher ROI due to bid gaps and (often) dramatically lower costs
3. The tendency of "deep searchers" - people who click on lower-ranking organic and PPC links - to be further into the research cycle and more likely to convert to the final objective
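To make that breakeven point concrete, here's a minimal Python sketch. The function name and the numbers in it are hypothetical - they're mine, not figures from the test below - but the relationship is the one that matters: a click only pays for itself through incremental conversions.

    # Minimal PPC breakeven sketch. All names and numbers here are
    # hypothetical illustrations, not figures from the test below.
    def breakeven_cpc(margin_per_order, incremental_conversion_rate):
        # The most you can pay per click and still break even:
        # the incremental margin an average click earns.
        return margin_per_order * incremental_conversion_rate

    # Say $30 margin per order, and 2% of paid clicks convert to
    # orders that would NOT have happened through the organic link.
    print(breakeven_cpc(30.00, 0.02))   # -> 0.6, i.e. a $0.60 max bid

Note the word incremental in there; if most of your paid clicks are cannibalized organic clicks, the effective conversion rate - and the bid you can justify - drops fast.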
So, for example, take this 2003 test. A page ranked organically at #2 for a certain high-volume phrase. This same page content was used to create a landing page for a PPC campaign using the same search phrase. The test was conducted in March, a period of neutral seasonality for this business.
With a #1 PPC ranking, the PPC campaign generated 11% in incremental sales with a 60-day latency tail, but had a negative 12% ROI on margin minus overhead (note the specific use of the word incremental, which I will explain further below).
When this same listing was dropped down to the "deep shoppers" at PPC rank #4 (in this case, the first ad at the bottom of the Yahoo page), it generated 4% incremental sales with a 1786% ROI on margin minus overhead over the same 60-day latency tail. That's almost an 18:1 payout.
Without the tail (first conversion only), it was 623% (~6:1).
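For those who want the payout ratios spelled out, here's the conversion in a couple of lines of Python - just the quoted ROI percentages read as rough multiples of click spend:

    # Converting the quoted ROI percentages into rough payout ratios.
    for label, roi_pct in [("rank #4, 60-day tail", 1786),
                           ("rank #4, first conversion only", 623)]:
        print(label, "-> about %d : 1" % round(roi_pct / 100))
    # -> about 18 : 1 and 6 : 1, the ratios quoted above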
In addition, the "deep shopper" segment on average had a 70% repeat purchase rate, as opposed to 58% for the #1 PPC position.
So even the "tail of the tail" was better at position #4. This was on the highest-volume search phrase for the site, so it made a huge impact on overall profitability.
Remember, the landing pages for both the high ranking Organic link and the #1 ranking PPC link were exactly the same - layout, copy, all of it.
Now, the reason I specifically used the word incremental is that we had a control, which allowed us to prove out the real financial dark side of using PPC alongside Top 3 ranking Organic listings.
When the #1 ranking PPC ran with the #2 Organic link, the sales volume coming from the PPC link ran about 43% versus 57% for the organic link.
This ties pretty closely with some recent studies on click behavior (on average, 60% click organic, 40% click paid).
But dig what this really means: if incremental sales are 11% versus control (no PPC), and 43% of sales volume with PPC running comes through the PPC link, that means nearly 77% of PPC sales were stolen from the organic side - they would have happened anyway without the PPC link.
So about 3/4 of the PPC clicks are pure subsidy cost - costs incurred to make a sale when the sale would have been made anyway.
Factor this "media cannibalization" into ROI, and now we're down around a negative 48% ROI for the test's #1-ranked PPC with Organic at #2.
For every $1 we spend, we lose 48 cents - 12 cents in tangible ROI, and 36 cents in "Opportunity ROI" - ROI we won't get because we wasted the click budget instead of using it to buy "real" incremental clicks.
Hey, just increase the budget, we'll make it up on volume!
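If you want to check that 77% figure yourself, the arithmetic takes a few lines of Python. I've indexed the no-PPC control to 100 sales; the 11% lift and the 43% / 57% split are the measured numbers from the test:

    # Working the subsidy math. Control (organic only) indexed to 100.
    control_sales  = 100.0
    total_with_ppc = control_sales * 1.11    # 11% incremental lift
    ppc_link_sales = total_with_ppc * 0.43   # 43% arrived via PPC link

    truly_incremental = total_with_ppc - control_sales   # = 11.0
    subsidized = ppc_link_sales - truly_incremental

    # Share of PPC-attributed sales that would have happened anyway:
    print(subsidized / ppc_link_sales)       # -> 0.769..., about 77%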
This is what makes me insane about the "best practice" of
allocating credit to a certain media when a visitor has been exposed
to both. Most of the time, it simply drives inflated
budgets. Oh, let's not forget the "ego spending"...
But honestly, I'm not sure the average person doing web traffic reporting would even think about the possibility of subsidy costs, because they have never done subsidy analysis and probably are not even aware of the issue.
They just report on what the traffic is up to - the clicks, paths, and so on.
Heck, even most marketing folks probably don't know about it, unless they are database marketers.
Subsidy costs are a common issue in many database marketing programs where there is analysis beyond simple "reporting". The subsidy effect is especially common in best customer programs, and the costs can be huge.
I've seen it many times, so I guess I just have an eye for spotting potential subsidy behavior in a situation and testing for it.
Bottom line - the subsidy effect is real, and if you are, let's say, e-mailing the same coupon to best customers that you are e-mailing to all customers, chances are the exact same thing is happening in this campaign - you are paying for sales that would have happened anyway, at lower margins. This is particularly true if your e-mail campaigns are based on some kind of "calendar" timing as opposed to timing based on the behavior of the customer.
This "calendar behavior" is known as "coupon proneness": the customer knows a coupon is coming and "waits" for it in order to make a purchase they would have made anyway at full price.
These costs are real; they just don't "pop out" on any reporting.
If you are in this boat, you have plenty of company - most of the major online / offline retailers pay no attention to the subsidy effect, and it is literally costing them millions. How can I tell? Because of the timing and offers of the e-mail and direct mail they send me.
By the way, it's much easier to test for subsidy costs in e-mail and direct mail campaigns because it is much easier to create good control groups than it is for visitor analysis.
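If you've never run one, the mechanics of an e-mail holdout test are simple. Here's a bare-bones Python sketch; the function names and the 12% / 10% buy rates are hypothetical, just to show the shape of the measurement:

    import random

    # Randomly hold out a slice of the list, so the "mailed" and
    # "holdout" groups differ only in whether they got the coupon.
    def split_holdout(customers, holdout_frac=0.10, seed=42):
        rng = random.Random(seed)
        pool = list(customers)
        rng.shuffle(pool)
        cut = int(len(pool) * holdout_frac)
        return pool[cut:], pool[:cut]        # mailed, holdout

    # The holdout buy rate is your baseline - sales that happen with
    # no campaign at all. The overlap with the mailed group's rate
    # is subsidy; only the difference is true lift.
    def subsidy_share(mailed_buy_rate, holdout_buy_rate):
        return holdout_buy_rate / mailed_buy_rate

    # Hypothetical: 12% of mailed customers buy, 10% of holdouts buy
    # anyway -> about 83% of "campaign" sales are subsidized.
    print(subsidy_share(0.12, 0.10))         # -> 0.833...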
All the details on how to set up these tests and measure subsidy costs, along with how to measure Halo Effects - revenue attributable to your campaigns that you probably are also not measuring - are in Chapter 29.
Subsidy Costs and Halo Effects are not trackable using standard web analytics reporting.
They require analysis, and you may have to go to the customer database to do this kind of stuff.
But you have a customer database, right? From the shopping cart or
fulfillment or something? Hope so.
Q: Can you think of a solid way this incrementality could be
tested for?
A: "Solid" way? Well, I guess that would depend on the technology that was available and the sales volume you are talking about. Not sure you can truly "A / B / C" it without some significant bid management / search engine API technology.
I'm not sure the engines would give up that kind of control - though I bet THEY have tested it at some level.
The above test was a 3-week manual "alternating days" test on Overture for a store with $500K - $1 million in annual sales and an average order size of about $72, so there wasn't a lot of room for high-tech tools.
If you run A on Monday, B on Tuesday, and control / C on Wednesday, then start over with A on Thursday and continue this rotation, the following Monday B will run, on Tuesday C will run, and so on. By the end of 3 weeks you will have A, B, and Control data normalized by day - both campaigns and the control will have run on every day of the week.
Not a statistically pure method, but not horrible practice, for sure - and cheap!
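Here's the rotation spelled out in a few lines of Python, if you want to see why three weeks is the magic number: with 3 conditions cycling daily over 21 days, each condition lands on each day of the week exactly once.

    from collections import Counter
    from itertools import cycle

    # 3 weeks of days, with A / B / Control cycling one per day.
    days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"] * 3
    schedule = list(zip(days, cycle(["A", "B", "Control"])))

    for day, condition in schedule:
        print(day, condition)

    # Every (weekday, condition) pair occurs exactly once in 21 days,
    # so each condition's totals are normalized by day of week.
    assert all(n == 1 for n in Counter(schedule).values())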
If the result spread (test versus control) is significant enough, as it was on this test, I'll give up points in accuracy to get closer to the "directional truth".
Each of the top 30 search phrases where there was a top 3 organic ranking for the phrase was optimized in this way, with the results directionally consistent across all phrases.
It was almost always more profitable to have a lower-than-#1 paid ranking when a top 3 organic ranking was present.
Below the top 30 phrases, some of the lower-volume phrases produced inconsistent results, probably due to test method error / lack of frequency.
While it may not be "practical" for large-scale retailers to test like this, you would think that for certain high-volume phrases it would be worth poking around a bit, given the potential for cost savings.
It's definitely not worth thinking about if sales volume is the focus, though, because (probably unprofitable) sales will certainly be lost without the #1 PPC listing.
Of course, changes in the way paid listings are displayed (often related to how many bidders there are) can change the outcome of this test.
The results on Google were also directionally consistent, though less dramatic. I assume this is because of the different PPC display approach versus Yahoo, but perhaps it's also due to some of the "alchemy" Google uses to rank PPC ads, which makes position more difficult to control on Google.
And for sure, there are reasons people buy PPC other than to drive profits, so being #1 may be worth it - but the true costs should be quantified through a test like this. Without question, if I were going to "allocate credit" for conversion to a media, I would insist on doing a test like this so the allocation does not turn out to be pie-in-the-sky.
In this case, credit was being given to PPC using an allocation model, as opposed to the true incrementality, with the result that the PPC budgets were tremendously inflated. Web analytics reporting would not uncover this issue; you have to understand the potential human behavior involved and be able to conjure up the thesis for an analysis. Then you have to figure out how to test the thesis.
Not that I'm a genius or anything; I'm pretty sure other professional database marketers and analysts have already discovered this subsidy effect. But what happens a lot in a data-driven environment is that the discoveries are so mission-critical, so potentially valuable from a competitive standpoint, that they are not discussed in "public" because of the liability and potential company backlash. Me, I don't have that issue, because it's not obvious where the data is coming from. However, you'll notice I waited until 2006 to talk about a test that happened back in 2003... ;)
Jim
-------------------------------
If you are a consultant, agency, or software developer with clients needing action-oriented customer intelligence or High ROI Customer Marketing program designs, click here
-------------------------------
That's it for this month's edition of the Drilling Down newsletter.
If you like the newsletter, please forward it to a friend! Subscription instructions are top and bottom of this page.
Any comments on the newsletter (it's too long, too short, topic suggestions, etc.)? Please send them right along to me, along with any other questions on customer Valuation, Retention, Loyalty, and Defection.
'Til next time, keep Drilling Down!
- Jim Novo
Copyright 2006, The Drilling Down Project by Jim Novo. All
rights reserved. You are free to use material from this
newsletter in whole or in part as long as you include complete
credits, including live web site link and e-mail link. Please
tell me where the material will appear.