FLEDGE: restricted reporting, training of bidding models #93
Hi Jonasz, This is an interesting question. In the relevant section of your original Outcome-based TURTLEDOVE write-up, you said:
What ideas did you have for addressing this optimization problem in the Outcome-based approach, if the user signals are kept browser-side, used solely for bidding, and never included in network requests? Since the actual bid is available at reporting time, there is of course some ability to approximate what the hidden on-browser signals could have been. But any user-specific data that we add creates a tracking risk.
Hi Michael,

At the time, the problem of optimizing models was an open question (and it still very much is), so we cannot say we had a clear solution in mind. In recent months we were hoping that the multi-browser aggregation service, with access to all bidding signals, could be the solution that would allow us to optimize models in a safe, private way (let's call this approach "unrestricted aggregates").

So the intent behind the original issue is: if we are heading towards unrestricted aggregates, but with an interim restricted-reporting phase, that interim phase is a huge issue for us. We would likely have to rethink and rebuild significant parts of our system just for this phase.

To clarify, we still don't need to and don't want to share userSignals with anyone - what we really need is the ability to optimize models. While we were hoping to use the general aggregate reporting mechanism for that, this need not be the case. Model optimization is quite different from other reporting use cases (like accounting), and is open to techniques like browser-side sampling, adding noise, browser-side aggregation, and potentially more (a rough sketch follows below).

To prepare for FLEDGE, it is important for us that we understand:
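To give a flavor of what we mean by such techniques (purely illustrative - none of these helpers are part of FLEDGE, and the record and its fields are made up), browser-side sampling and noising of a training record could look roughly like this:

```js
// Illustrative sketch only - not a FLEDGE API. Shows browser-side sampling and
// noise applied to a hypothetical training record before anything is reported.

// Browser-side sampling: only a small random fraction of records is ever reported.
function maybe_sample(record, sample_rate) {
  return Math.random() < sample_rate ? record : null;
}

// Adding noise: perturb each numeric feature (Laplace noise here) so that no
// exact per-user value leaves the browser.
function add_laplace_noise(value, scale) {
  const u = Math.random() - 0.5;
  return value - scale * Math.sign(u) * Math.log(1 - 2 * Math.abs(u));
}

function noised_record(record, scale) {
  return Object.fromEntries(
    Object.entries(record).map(([key, value]) => [key, add_laplace_noise(value, scale)])
  );
}

// E.g. report roughly 1% of records, each with noised features:
const sampled = maybe_sample({ clicks: 3, recency: 0.7 }, 0.01);
const to_report = sampled ? noised_record(sampled, 0.1) : null;
```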
That's a lot of questions, and I understand the timeline is narrow and the problem is complex. Perhaps this would be a good topic to discuss in a video conference?

Best regards,
Jonasz
I agree, we should talk about this in a live meeting, preferably one where @csharrison is present as well. Could you explain what you mean by "the approach of unrestricted aggregates" though? Are you referring to something like SPURFOWL, or a different idea?
In my view SPURFOWL is orthogonal to this question. By "unrestricted" I mean "with no restrictions as to which signals can be used to build a report from within bidding_fn". In my original understanding, such "unrestricted reports" would later be available as aggregates, thus "unrestricted aggregates".
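As a very rough sketch of what we had in mind (send_aggregate_report is a hypothetical call, not something in the explainer, and the signal and bucket names are made up), "unrestricted" would allow something like:

```js
// Hypothetical sketch only: send_aggregate_report is not an existing FLEDGE API,
// and the function shape is simplified. "Unrestricted" means the report may be
// built from any signal the bidding function sees; privacy would come from the
// buyer only ever receiving noised aggregates over many users, never this
// individual report.
function bidding_fn(interest_group, browser_signals) {
  const signals = interest_group.user_bidding_signals;
  const bid = 0.01 * (signals.predicted_ctr || 0); // stand-in for a real model

  // Hypothetical call: the browser / aggregation service would batch this payload
  // with other users' payloads and release only aggregate statistics.
  send_aggregate_report({
    bucket: 'ctr_model_training',
    payload: {
      predicted_ctr: signals.predicted_ctr,
      prev_wins: browser_signals.prev_wins.length,
      bid: bid,
    },
  });

  return { bid: bid };
}
```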
I was wondering, should we use the scheduled FLEDGE call slot (#88), or perhaps organize a separate call?
Hi Michael, Thanks for the brief clarification during the FLEDGE call - as I understand, the target reporting mechanism is still an open question, and so is the timeline. We'd like to propose a minor yet powerful extension to the temporary event-level reporting:
Some thoughts:
This is just a basic, high-level idea; the final design would require some more work. (For example, it may be useful to specify fallback tokens in case …)

Please let me know what you think!

Best regards,
Jonasz
Ah, interesting thought! It seems like the k-anonymity constraint would need to be applied to the complete set of information that can flow from the bidding function to the reporting environment. So if the whole tuple …

Does that match your thinking? From the implementation point of view, this is a somewhat different use of the k-anonymity infrastructure, since it's more real-time. But in principle it makes sense to me.
We were thinking about keeping the current specification of …

Conceptually, that could be thought of as two independent reporting channels:
This way, we are hoping to have a popularity threshold on …

Note that …
Ah! Got it, I definitely misunderstood at first. Much easier to offer this signal if it's not joined with the event-level data. It sounds to me like you want the …

@csharrison and @shivanigithub, FYI for worklets using aggregation.
Do you think it feasible to support such a … ?

This is a critical question for us, and if the Agg Service is not ready by that time, perhaps FLEDGE could temporarily reuse the popularity-counting infrastructure to provide (basic) support for …
Yes! I do think we will make this available before 3p cookies go away.
Tying this conversation together with other statements: I am now confused about the timeline, particularly about what is going to be available for reporting. On Feb 17, during the WICG call, @michaelkleber, you said:
Yet, in this issue, you said:
What "this" refers to is quite unclear to me.
If my understanding is correct, this means that advertisers would have to build a different bidding model optimization based on …

Could you clarify the timeline, with distinct periods and what will be available for reporting during each period?
Hi Michael,
Thank you for sharing FLEDGE; the proposal looks very promising, and we are committed to taking part in the proposed experimentation. After initial analysis, there is one important issue that we would like to raise.
FLEDGE proposes a temporary event-level reporting mechanism based on report_win, a JS function supplied by the buyer. We are concerned about our ability to optimize our bidding models (for example, click-through models) under the proposed specification, which restricts the kind of data that may be reported. (For example, report_win has no access to user_bidding_signals or to prev_wins.) To effectively train our models, we need a reporting mechanism that is based on all the signals that are used by our bidding function (relevant issue: #54).
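To make the asymmetry concrete, here is a rough sketch of the two worklet functions (parameter lists approximated from the explainer and return shapes simplified; the point is only which inputs are and are not available):

```js
// Approximation of the shapes described in the explainer, for illustration only.

// The bidding function sees the interest group (including user_bidding_signals)
// and browser_signals such as prev_wins, and can use both to compute the bid:
function generate_bid(interest_group, auction_signals, per_buyer_signals,
                      trusted_bidding_signals, browser_signals) {
  const user_signals = interest_group.user_bidding_signals; // available here
  const prev_wins = browser_signals.prev_wins;              // available here
  const bid = user_signals && prev_wins ? 1.0 : 0.5;        // stand-in for a real model
  return { bid: bid }; // return shape simplified
}

// The reporting function does not receive those inputs, so the training example
// for the model that produced the bid cannot be labeled with its own features:
function report_win(auction_signals, per_buyer_signals, seller_signals, browser_signals) {
  // no user_bidding_signals and no prev_wins available here
}
```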
The explainer mentions that:
From our perspective, such a reporting mechanism with access to user_bidding_signals (and all other signals) is critical before third-party cookies are phased out. Otherwise we will have no way of training our models, which would negate the usefulness of FLEDGE.
I was wondering, is it safe to assume such a reporting mechanism will be available before third-party cookies are dropped? Or should we perhaps look for ways to extend the temporary reporting mechanism?
Best regards,
Jonasz