Some thoughts & requests for clarification on Google's FLoC A/B experiment:
What is the "all else equal"?
In the FLoC group, what elements of identity were preserved in the experiment that would be lost in a full deployment of the Privacy Sandbox? For instance, how did the experiment handle ad frequency capping?
Internal & external validity
As I understand it, the experiment operated on a sub-sub-subset of Chrome's logged-in users. It would be useful to clarify this and to provide evidence of the experiment's internal validity. The narrow set of included users also raises concerns about external validity. For instance, the experiment apparently excluded users who were eligible for retargeting, so what share of users remained, and how many of them had Google behavioral segments attached?
Comparing conversions
Conversions vary in value. Google presented a single metric (cost per conversion) when evaluating the experiment. In his presentation, Ravichandran explained that Google used a similarity metric to ensure comparability of the experimental outcomes. How was this done?
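To make the concern concrete, here is a toy sketch (all numbers and arm labels are hypothetical, not taken from Google's experiment or methodology) showing how two arms with identical cost per conversion can deliver very different conversion value:

```python
# Toy illustration (hypothetical numbers, not Google's methodology):
# two arms with the same cost per conversion, but different value per conversion.
control   = {"spend": 1000.0, "conversions": 100, "avg_value": 50.0}
treatment = {"spend": 1000.0, "conversions": 100, "avg_value": 35.0}

for name, arm in (("control", control), ("treatment", treatment)):
    cpa = arm["spend"] / arm["conversions"]                      # cost per conversion
    roas = arm["conversions"] * arm["avg_value"] / arm["spend"]  # value returned per dollar spent
    print(f"{name}: cost/conversion = {cpa:.2f}, ROAS = {roas:.2f}")
```

Both arms report a cost per conversion of 10.00, yet the value returned per dollar differs by roughly 30%, which is exactly the kind of difference a single cost-per-conversion metric would hide.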
Marketplace spillovers
One challenge in evaluating the effectiveness of such proposals is accounting for general equilibrium effects, where advertisers reallocate their budgets into or out of the market in response to offering quality (see, e.g., Blake and Coey (2014)). In this instance, were advertiser budgets able to move across groups, or were they held (proportionately) equal across groups? How does this affect the interpretation of the findings?
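The interpretation issue can be sketched with a toy model (entirely hypothetical parameters, not the experiment's data): if budgets are allowed to chase the better-performing arm, auction prices move with them, and the per-arm cost-per-conversion comparison can even flip relative to the held-equal case.

```python
# Toy sketch (hypothetical parameters): how budget reallocation across
# arms can distort a per-arm cost-per-conversion comparison.
def cost_per_conversion(budget, conv_rate, cpm=10.0):
    impressions = budget / cpm * 1000     # impressions bought at the effective CPM
    conversions = impressions * conv_rate
    return budget / conversions

# Budgets held (proportionately) equal across arms.
held_equal = {
    "cookies": cost_per_conversion(budget=500.0, conv_rate=0.0020),
    "floc":    cost_per_conversion(budget=500.0, conv_rate=0.0019),
}

# Advertisers shift spend toward the cookie arm, bidding up its prices
# (higher effective CPM) while prices fall in the FLoC arm.
reallocated = {
    "cookies": cost_per_conversion(budget=700.0, conv_rate=0.0020, cpm=12.0),
    "floc":    cost_per_conversion(budget=300.0, conv_rate=0.0019, cpm=9.0),
}

print("budgets held equal: ", held_equal)
print("budgets reallocated:", reallocated)
```

In the held-equal case the cookie arm looks cheaper per conversion; once spend and prices are allowed to move, the FLoC arm can look cheaper simply because money (and competition) left it. That is why it matters how the experiment handled advertiser budgets across groups.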