OpenRTB support: Bid cache #216
Got an interesting suggestion today from someone trying to help simplify the "what to cache?" options. It was: always cache the entire OpenRTB bid, and expose it. So, for example:

This would basically reduce the complexity. Not saying it's a great idea... but it doesn't seem like a bad one, and it's creative enough I thought I'd mention it.
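The code sample from this comment didn't survive, but to illustrate the idea: caching "the entire OpenRTB bid" would mean storing the full `Bid` object. A rough sketch (field names follow the OpenRTB 2.5 `Bid` object; the values are made up):

```json
{
  "id": "bid-123",
  "impid": "imp-456",
  "price": 1.25,
  "adm": "<VAST version=\"3.0\">...</VAST>",
  "crid": "creative-789",
  "w": 300,
  "h": 250
}
```

Under this proposal, the cache would store the whole object above, rather than a trimmed-down blob containing only the creative markup.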
Interesting concept, but doesn't that assume JSON is the structure? Also, we are storing a lot of stuff we probably won't ever need.
Yeah... we talked about storing stuff we don't need, but the rest of the Bid is tiny compared to the creative markup. I am concerned about the JSON structure requirement, though. In order to return a

Another concern came to mind after thinking about it some more:
Overall... I'm not too crazy about this idea. It seems to make things a tiny bit simpler for PBS, but much more complex for PBC.
I think I've got something reasonable which hits all the use-cases. Initially, we can add an option like this inside

Depending on which use-cases make sense in the future, we can expand the API to something like this without introducing any breaking changes:

Whenever we introduce
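The option sketches from this comment were lost. For context, one plausible shape for a request-level cache option inside `request.ext` (the `prebid.cache` path below matches what prebid-server eventually documented, but the exact fields here are a reconstruction, not the original proposal):

```json
{
  "ext": {
    "prebid": {
      "cache": {
        "bids": {},
        "vastxml": {}
      }
    }
  }
}
```

In this shape, `bids` asks the server to cache the JSON blob for each bid, `vastxml` asks it to cache only the VAST markup, and including both objects requests both; new use-cases can be added as sibling keys without breaking existing callers.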
Per the discussion in #199,
Per some more discussion with @mjacobsonny, I learned some things. Today, Prebid publishers expect:
There is a problem with this today: there is no good support for a bidder who makes two bids on the same Imp for separate Deals. It was proposed to add a new key to support this:
However, he also described a business use-case for

So... it's my opinion that improving Deal support should be a future improvement. Since we're not deprecating

All that in mind... this is the behavior I implemented:
This will apply
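The description of the implemented behavior was lost from this thread, but the discussion is about which targeting key-values PBS sends back for ad-server line-item matching. As an illustration only (these are the standard Prebid targeting key names; the exact set this change emits is an assumption):

```json
{
  "hb_pb": "1.20",
  "hb_bidder": "rubicon",
  "hb_cache_id": "a1b2c3d4",
  "hb_deal": "deal-123"
}
```

The Deal problem above is that keys like these are set once per Imp per bidder, so two bids from the same bidder on the same Imp for different Deals can't both be represented.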
One last-minute change to this before merge... we replaced

Besides just being a better name, this will save existing Prebid Mobile publishers from having to update their DFP creatives when they switch to OpenRTB.
The `/openrtb2/auction` endpoint needs to support sending bids to the cache. Right now, I don't think we have a good enough sense of the cache's use-cases to build a quality feature here. This is the fundamental tradeoff:
If the cache behavior needs to be "chosen", because different behaviors have equal merit, then we have two options:

- `request.ext`
However... I also want to note the following, which we should take very seriously:
To grow this project smoothly, our goal should be to identify the fewest "choices" necessary to encourage adoption on both sides.
These are things to consider:
The Prebid.js philosophy is: "the publisher ultimately decides which bid wins." If PBS caches the bids, Prebid auctions will have overall better performance. However, it is possible for Prebid.js to expose an API which caches only the bid which the publisher chooses.
The prebid-mobile philosophy is: "App code updates are expensive. It's better if the Server does everything, including picking the winning bid." As far as I can tell, there's no room for negotiation here.
@DucChau mentioned that Rubicon wanted an option to cache just the VAST XML. Currently, PBS caches a JSON blob with other info too.
There's some ongoing debate about the cache's usefulness for certain types of bids in "Do not cache bids which round down to $0.00 CPM" (#199).
These are some deficiencies in the `/auction` endpoint's cache behavior. Ideally, we can do better in `/openrtb2/auction`:

`/auction` doesn't let you cache just the VAST XML; it requires the JSON blob. If this is a valid use-case, then the `/openrtb2/auction` API should support this too.

@bretg and @mjacobsonny might want to discuss this. At the moment, I don't think it's fleshed out well enough for much useful engineering work to be done.
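For context on what "caching just the VAST XML" could look like at the cache layer, here is a sketch of a Prebid Cache request (the `puts`/`type` shape follows the public prebid-cache API, but the specific values are illustrative):

```json
{
  "puts": [
    { "type": "xml", "value": "<VAST version=\"3.0\">...</VAST>" },
    { "type": "json", "value": { "adm": "...", "width": 300, "height": 250 } }
  ]
}
```

A `"type": "xml"` put stores the markup verbatim, while a `"type": "json"` put stores a blob with extra metadata; the open question above is whether the `/openrtb2/auction` API should let callers choose between them.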