Tuesday, April 13, 2010

The End to End Principle in Ad Exchange Design (The Thin Exchange, 2)

Where should decisions about which ad runs where be made? Ajay Sravanapudi, in his recent article on AdExchanger, says:
a DSP is really just a feature on an exchange... A DSP simply uses [RTB API] of an exchange to buy media and run campaigns more effectively. The exchange has an ad server that can deliver campaign pacing, frequency capping, targeting, etc. All that is missing is some intelligence to “auto-magically” buy media on behalf of the campaign... Dozens of ad networks have done this for years on things like YieldManager on the RightMedia Exchange (RMX). If we can simply layer this “auto-magical” intelligence on the exchange then there is no need to pay for a DSP.
I disagree. There's a sort of businessman's view here, where an intuition about power leads to an answer at odds with systemic efficiency.

Exchange 1.0 did not sell in real time, so the exchanges had to keep the trading rules in-system. Like NASDAQ (and other limit-order-book markets), the exchange hosted the standing orders: who wanted to buy and who wanted to sell, at any given price, for each of the thousands of things traded there.

But RTB ad exchanges don't have thousands of things being traded; they have an almost unlimited number of things. Each ad impression--the placement, the context, the viewer--is different from every other. There is no commodity, so there can be no order book.

An order book only works if the commodities are limited, so that the rules can be simple: e.g. "if the ad presented is on the front page of CNN.com, and the person viewing the ad is a male 18-34 years old, then bid $5.00 per thousand." That doesn't work when the ad presented is a photo of a red flower on Photobucket shown to a non-logged-in 33-year-old male in Northern New Jersey at 12:13am on a Sunday, who searched for gardening tools at an online retailer yesterday but didn't buy anything, and whose circle of acquaintances includes several people who bought sunglasses this week. What ad you put in front of this person, and at what price, is a difficult problem, not one that can be reduced to simple rules.
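
To make the contrast concrete, here is a minimal sketch of what an Exchange 1.0-style standing rule looks like as code. Everything in it (the placement string, the targeting fields, the $5.00 CPM) is hypothetical, chosen only to mirror the CNN.com example above:

    # A sketch of rule-driven buying, Exchange 1.0-style. All names and
    # values are hypothetical, mirroring the CNN.com example above.
    def rule_bid(impression):
        """Return a pre-set CPM bid if the impression matches the rule."""
        if (impression["placement"] == "cnn.com/frontpage"
                and impression["gender"] == "male"
                and 18 <= impression["age"] <= 34):
            return 5.00  # $5.00 per thousand, fixed in advance
        return None  # no standing order covers anything else

The rule is a fixed predicate over a handful of enumerable fields. Covering the Photobucket impression the same way would mean writing a rule for every combination of placement, context, time, and viewer history, which is hopeless.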

The only way to move away from rule-driven trading is to use algorithms. On the buy side, developing the right algorithm requires a ton of experience, a ton of data from live campaigns (and insight into how well each individual impression worked), a huge amount of experimentation, mathematical savvy, and a dose of genius. On the sell side, the algorithms are even more complicated to develop. The algorithms are where the intelligence is.
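
To give a feel for the buy side, here is one common shape such an algorithm takes--a sketch under my own assumptions, not any particular DSP's method: predict the probability that this impression produces the advertiser's desired action, and bid its expected value, less a margin.

    # A sketch of expected-value bidding, one common shape for a buy-side
    # algorithm. The model, feature names, and margin are all hypothetical.
    def algorithmic_bid(impression, model, value_per_conversion, margin=0.8):
        """Bid what this one impression is predicted to be worth."""
        # The model maps the full feature vector -- placement, context,
        # time of day, viewer history, social signals -- to a probability.
        p_conversion = model.predict(impression)
        expected_value = p_conversion * value_per_conversion
        return expected_value * margin * 1000  # price quoted per thousand

All of the difficulty hides inside model.predict; that is where the experience, the live-campaign data, the experimentation, and the dose of genius go.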

Where should these algorithms be run? Here's an analogy. Let's say you wanted your computer to run the algorithms. Where should they be coded? In the operating system? Of course not. Hard-coded into an application? Almost certainly not. You'd code them as routines (macros, say) to be run by a more general application, like Excel. The lower down the stack you go, the more generic the functionality should be. This is called the End-to-End Principle:
Using performance to justify placing functions in a low-level subsystem must be done carefully. Sometimes, by examining the problem thoroughly, the same or better performance enhancement can be achieved at the high level. Performing a function at a low level may be more efficient, if the function can be performed with a minimum perturbation of the machinery already included in the low-level subsystem, but just the opposite situation can occur – that is, performing the function at the lower level may cost more – for two reasons. First, since the lower level subsystem is common to many applications, those applications that do not need the function will pay for it anyway. Second, the low-level subsystem may not have as much information as the higher levels, so it cannot do the job as efficiently.
Saltzer, Reed, and Clark, in "End-to-End Arguments in System Design". The paper describes an idea that has been central to internet architecture since the early days: don't put in the center what can be done at the edges. David Isenberg's "Rise of the Stupid Network" makes a similar argument, predicting that the internet would beat out the "smart" telecom networks.

Okay, I can hear all you adtech gearheads: is the exchange really "low" level? It's probably the most complicated piece of software in the whole ecosystem.

But "low-level" in this argument really means the functionality used by the largest number of end-user applications. It has nothing to do with how close to the hardware the function is (the two are correlated, but that's outside my scope).

Clearly the exchange provides the functionality shared by the most end users. Each agent is different: different approaches, different algorithms, different in ways none of us has yet imagined. Why should we attempt to encode this as-yet-undetermined difference into the exchange? Putting DSP functionality into the exchange simply means that everyone has to pay for it, even those who don't use it. And it means that other, better ways to be a DSP do not get developed, because the customer would have to pay twice: once for the independent DSP and once for the exchange's built-in one.

And this brings me to my real beef with the idea that DSP functionality should be built into the exchange, or that any functionality beyond what is absolutely necessary should be. The internet has been so phenomenally successful because the low levels are bare-bones and flexible. HTTP, FTP, POP, SMTP, IMAP, and the rest were all built on top of TCP because TCP does nothing but transfer data from one place to another. It has no expectations about what that data is or what it should do. If the internet's designers had made TCP more "intelligent", we probably would never have had the Web or Skype. Building low-level functionality that is simple, and that allows layers to be built on top, enables innovation. And heaven knows what we need right now in interactive advertising is some innovation.

I don't think the ad exchanges should layer in "auto-magical intelligence." I don't think they should layer in anything. I think they should start dumping functionality like Carl Fredricksen tossing furniture out of his house. The ad exchange should do three things. It should do them fast, it should do them cheaply and it should do them six-sigma. What the ad exchange should do, and all it should do, is cookie-match, cross and clear. The ad exchange should be thin, and as dumb as possible.
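
For concreteness, here is a minimal sketch of that thin loop--cookie-match, cross, clear, and nothing else. The structure and every name in it are my own invention, and second-price settlement is just one plausible clearing rule:

    # A sketch of the thin exchange: cookie-match, cross, clear.
    # All names are hypothetical; second-price clearing is one choice.
    def run_auction(impression, exchange_uid, bidders, cookie_map):
        bids = []
        for bidder in bidders:
            # Cookie-match: translate the exchange's user ID into the
            # bidder's own ID, so the bidder can consult its own data.
            buyer_uid = cookie_map.get((bidder.name, exchange_uid))
            # Cross: ask the agent at the edge for a bid. All the
            # intelligence lives out there, not in here.
            bid = bidder.bid(impression, buyer_uid)
            if bid is not None:
                bids.append((bid, bidder))
        if not bids:
            return None
        # Clear: highest bid wins, pays the second price.
        bids.sort(key=lambda pair: pair[0], reverse=True)
        winner = bids[0][1]
        price = bids[1][0] if len(bids) > 1 else bids[0][0]
        return winner, price

Everything interesting--pacing, frequency capping, targeting, the "auto-magical" intelligence--happens inside bidder.bid, out at the edges, where it belongs.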

6 comments:

  1. You make strong arguments for why exchanges should not layer in features like a DSP. What about features like the data that BlueKai, eXelate, etc. provide? The way I think about it is that exchanges should provide basic data (behavioral history, etc., about the user) so that all players have the information to bid effectively and provide the best value to the seller, much as on the stock exchanges everybody has access to financial metrics. That would mean current data providers become research/analytics firms, much like Wall Street research firms, providing buy/hold/sell recommendations (at various price points), though these would have to be specific to advertisers or advertiser categories.

  2. I think having the cookie mapping in the exchange is essential, because I can't think of any other way for the data piece to work within the short time to execution.

    But the actual generation of usable data should not be within the exchange. That piece is still undergoing phenomenal innovation. Putting it into the exchange would stifle that.

  3. DSP and exchange should be separate, but cookie mapping, to Jerry's point, is essential.

  4. I agree with Jerry: the agents should live outside the exchange. Self-optimization based on end-user needs is where the main value lies, and it will be different for different constituents. The agents, in the end, are where the innovation will occur, some of it not yet even thought of.
    Rick Landsman

  5. Jerry,

    I was finally introduced to your blog this morning and could not be happier. Great stuff in here! So I am thinking that analytics is going to be a new layer in the stack. The core competency of DSPs will be aggregation of exchange inventory, data, and software to manage campaigns. Modeling will have to be done independently. Is it a service business?

  6. Thanks, Craig!

    Just want to distinguish between analytics as measurement of results (e.g. PerformLine, Flurry, Marketspace) and analytics as analysis. Measurement is, as you know, already out there.

    Analysis/decision-making is definitely part of the stack. I think of the buy-side piece being in two parts: services and technology. The DSPs right now are doing both, out of necessity. But the two are not necessarily coupled. Think hedge fund versus Bloomberg... the hedge fund provides the analysis, Bloomberg provides a tech platform (ok, that's maybe oversimplified, but you know what I mean?)

    If some DSPs become more and more actual platforms, then there will definitely be a services layer to put the needed human intelligence to work. A big part of this will be the modeling, testing, measuring, re-modeling iteration. This is happening now, but extremely primitively. I mean, no offense to everyone who is doing it, but the smart people I talk to admit that there is still a huge gap between what the agencies do (the "modeling" of what people will respond to) and what the DSPs do (the analysis of what is working).

    Bridging the gap between the psychological nature of why people pay attention and why they choose--the expertise of ad agency folk--and the use of data to predict results--the expertise of analysts--is well in the future. Huge opportunity there.

