From 517cba53041c96a35e2dd8e4d9d8f286832e7f29 Mon Sep 17 00:00:00 2001 From: SpencerFleury <159941756+SpencerFleury@users.noreply.github.com> Date: Thu, 17 Oct 2024 13:14:08 -0700 Subject: [PATCH 1/2] various tickets DOC-262 DOC-220 DOC-293 DOC-288 DOC-276 DOC-292 --- .../analytics/en/behavior-offset.md | 62 +++++++++++-------- .../analytics/en/collaborate-with-spaces.md | 6 +- .../experiment-results-use-formula-metrics.md | 38 +++++++----- .../partners/en/integration-portal.md | 8 ++- .../collections/web_experiment/en/actions.md | 13 ++-- .../web_experiment/en/implementation.md | 12 ++-- .../web_experiment/en/performance.md | 12 ++-- .../en/set-up-a-web-experiment.md | 10 ++- .../web_experiment/en/targeting.md | 11 ++-- .../collections/web_experiment/en/tracking.md | 10 ++- .../en/multi-armed-bandit-experiments.md | 6 +- 11 files changed, 118 insertions(+), 70 deletions(-) diff --git a/content/collections/analytics/en/behavior-offset.md b/content/collections/analytics/en/behavior-offset.md index 08598733e..bf6c8066f 100644 --- a/content/collections/analytics/en/behavior-offset.md +++ b/content/collections/analytics/en/behavior-offset.md @@ -3,27 +3,29 @@ id: 26102630-8a48-4f72-bafe-35060873335b blueprint: analytic title: 'Behavior offset: Segment users over two distinct time periods' source: 'https://help.amplitude.com/hc/en-us/articles/360040965352-Behavior-offset-Segment-users-over-two-distinct-time-periods' +this_article_will_help_you: + - 'Track customer milestones' + - 'Compare user behavior between two cohorts' + - 'Understand the difference between behavior offset and behavioral cohorts' + - "Determine when using a behavior offset is appropriate, and when it isn't" +landing: false +exclude_from_sitemap: false +updated_by: 5817a4fa-a771-417a-aa94-a0b1e7f55eae +updated_at: 1729182351 --- -#### This article will help you: - -* Track customer milestones -* Compare user behavior between two cohorts -* Understand the difference between behavior offset and behavioral cohorts -* Determine when using a behavior offset is appropriate, and when it isn't - With Amplitude's [behavioral cohorts](/docs/analytics/behavioral-cohorts), you can create groups of users who share a pattern of behavior. The **behavior offset** feature gives you the power to further segment these users based on behaviors they've displayed in two distinct time periods. Common use cases for behavior offsets include: -* Identifying users who made at least one purchase during the past week (the current period) who have also made at least two more purchases over the preceding 90 days. This can be used as a proxy for customer satisfaction. -* Measuring users who installed the app but did *not* make a purchase during the 45 days after install. This information can help you target re-engagement campaigns with the goal of convincing these new users to convert into paying customers. -* For media companies, behavior offsets can identify users who purchase a subscription but don’t read an article in the following two weeks. This can help identify users who haven’t built a habit of reading content, who you can then target in your churn prevention efforts. -* Conversely, identifying users who read one article in a given week and then read two or more articles the following week can help you target engaged readers for subscription offers. +* Identifying users who made at least one purchase during the past week who have **also** made at least two more purchases over the preceding 90 days. This is a good proxy for customer satisfaction. 
+* Measuring users who installed the app but **didn't** make a purchase during the 45 days after install. This information can help you target re-engagement campaigns and convince these new users to convert into paying customers. +* For media companies, behavior offsets can identify users who purchase a subscription but don’t read an article in the following two weeks. This can help identify users who haven’t built a habit of reading content and target them in churn prevention efforts. +* Identifying users who read one article in a given week and then read two or more articles the following week can help you target engaged readers for subscription offers. -Notice that in each of these use cases, there are two cohorts in play, separated from each other by the passage of time: In the second example, they are: +In each of these use cases, there are two cohorts in play, separated from each other by the passage of time. In the second example, they are: * Users who installed the app, and -* Users who did not make a purchase in the subsequent 45 days +* Users who didn't make a purchase in the subsequent 45 days Behavior offsets make it easy to segment the users who appear in both of these cohorts. @@ -33,34 +35,42 @@ This feature is available to users on **Growth** and **Enterprise plans** only. ## Before you begin -Before getting started, we suggest you read up on [behavioral cohorts](/docs/analytics/behavioral-cohorts)—and [rolling windows](/docs/analytics/charts/event-segmentation/event-segmentation-interpret-2) until you're comfortable with both concepts. +Before getting started, you should read up on [behavioral cohorts](/docs/analytics/behavioral-cohorts) and [rolling windows](/docs/analytics/charts/event-segmentation/event-segmentation-interpret-2) until you're comfortable with both concepts. ## Add a behavior offset to an in-line cohort -In this section, we'll be following along with the first example in the bulleted list above: segmenting for users who made at least one purchase in the current period, but also made two or more purchases in the last 90 days. To add this behavior offset, follow these steps: +This section follows the first example in the bulleted list above: segmenting for users who made at least one purchase in the current period, but also made two or more purchases in the last 90 days. + +To add this behavior offset, follow these steps: 1. In the Segmentation Module, click *+ Performed* to begin creating a new [in-line cohort](/docs/analytics/behavioral-cohorts). -2. Start by defining the **previous period event**. In this example, our previous period event is `Complete Purchase`. +2. Define the **previous period event**. In this example, the previous period event is `Complete Purchase`. 3. Adjust the operator and event frequency, if necessary. 4. From the *More Options* menu for this event, click *Add rolling*. -![add rolling.png](/docs/output/img/analytics/add rolling.png) +![add rolling.png](/docs/output/img/analytics/add_rolling.png) -In the *rolling over* field that appears, enter the number of days you want to allow for the current period event to occur. Keeping with our example scenario, we enter 90. +In the *rolling over* field, enter the number of days you want to allow for the current period event to occur. In this example scenario, that value is 90. -**NOTE:** You can change the default durations from per day to weekly, hourly, monthly, or quarterly by changing the setting in the Metrics Module's date picker. 
- -Also, when using a *during* cohort (as opposed to *in each*; [see this article on in-line cohorts in Amplitude for a more detailed explanation](/docs/analytics/behavioral-cohorts)), the *offset* function will allow you to create daily offsets, regardless of the interval specified in the date picker. +{{partial:admonition type='note'}} +You can change the default durations from per day to weekly, hourly, monthly, or quarterly by changing the setting in the Metrics Module's date picker. +{{/partial:admonition}} + +Also, when using a *during* cohort (as opposed to *in each*; [see this article on in-line cohorts in Amplitude for a more detailed explanation](/docs/analytics/behavioral-cohorts)), the *offset* function allows you to create daily offsets, regardless of the interval you set in the date picker. ![behavioral_offset_duration_dropdown.png](/docs/output/img/analytics/behavioral_offset_duration_dropdown.png) -5. From the same menu as in step 4, click *Add offset*. In the *offset* field that appears, enter the number of days after the occurrence of the previous period event that you want to wait before the rolling window (see step 4 above) begins. In this example, we do not want to include the day of the previous period event in the rolling window, so we enter 1. If you want to include that day in your analysis, do not include an offset. -6. Click *+ Performed* again. From the *and who performed* drop-down, select the **current period event**. Continuing with our example, this will once again be `Complete Purchase`. +5. From the same menu as in step 4, click *Add offset*. In the *offset* field, enter the number of days after the occurrence of the previous period event that you want to wait before the rolling window (see step 4 above) begins. + + In this example, the day of the previous period event doesn't belong in the rolling window, so enter 1. If you want to include that day in your analysis, don't include an offset. +6. Click *+ Performed* again. From the *and who performed* drop-down, select the **current period event**. Continuing with the example, this is `Complete Purchase`. -Your in-line cohort— in which you’re segmenting for users who completed one or more purchases in the current period, and completed two or more purchases in the previous period (90 days, in this case)—should now look like this: +Your in-line cohort—in which you’re segmenting for users who completed one or more purchases in the current period, and completed two or more purchases in the previous period (90 days, in this case)—should now look like this: ![in-line cohort segment.png](/docs/output/img/analytics/in-line cohort segment.png) -By applying the rolling window and offset to the previous period event, we've essentially shifted this cohort to a time in the past. Bear in mind that this process offsets **only** the in-line cohort, and **not** the event selected in the Event Module. +By applying the rolling window and offset to the previous period event, you've essentially shifted this cohort to a time in the past. This process offsets **only** the in-line cohort, and **not** the event selected in the Event Module. -**NOTE:** In-line offset for *in each* cohorts is available on **Event Segmentation** charts. In-line offset for *during* cohorts is available on **all chart types except** Compass. When using a *during* cohort, your date range must be defined in days. +{{partial:admonition type='note'}} +In-line offset for *in each* cohorts is available on **Event Segmentation** charts. 
In-line offset for *during* cohorts is available on **all chart types except** Compass. When using a *during* cohort, you must define your date range in days.
+{{/partial:admonition}}
\ No newline at end of file
diff --git a/content/collections/analytics/en/collaborate-with-spaces.md b/content/collections/analytics/en/collaborate-with-spaces.md
index e1ad9cfe3..5cec646dd 100644
--- a/content/collections/analytics/en/collaborate-with-spaces.md
+++ b/content/collections/analytics/en/collaborate-with-spaces.md
@@ -9,7 +9,7 @@ this_article_will_help_you:
 landing: false
 exclude_from_sitemap: false
 updated_by: 5817a4fa-a771-417a-aa94-a0b1e7f55eae
-updated_at: 1725396408
+updated_at: 1729181305
 ---
 This article explains how to take advantage of the different features offered by [spaces](/docs/get-started/spaces). Read that article before continuing.
@@ -41,7 +41,9 @@ A shortcut is a way to add content to multiple spaces and folders. Anyone can cr
 
 ## Manage space members
 
-You can add new members to your space, or manage the access permissions of current space members, through the *Manage Members* button.
+You can add new members to your space, or manage the access permissions of current space members, through the *Manage Members* button.
+
+When you add a member to a space as a viewer or editor, they can view or edit all content within that space. The only exception is when the member lacks permissions for a particular project with content stored in the space. **Project permissions always take priority over space-level permissions.**
 
 There are three levels of permissions:
 
diff --git a/content/collections/experiment-results/en/experiment-results-use-formula-metrics.md b/content/collections/experiment-results/en/experiment-results-use-formula-metrics.md
index ce628526c..3e8830caf 100644
--- a/content/collections/experiment-results/en/experiment-results-use-formula-metrics.md
+++ b/content/collections/experiment-results/en/experiment-results-use-formula-metrics.md
@@ -9,7 +9,7 @@ this_article_will_help_you:
 landing: true
 exclude_from_sitemap: false
 updated_by: 5817a4fa-a771-417a-aa94-a0b1e7f55eae
-updated_at: 1725919213
+updated_at: 1729182856
 landing_blurb: 'Understand the different kinds of formula metrics supported by the Experiment Results chart'
 ---
 In an Experiment Results chart, using a **formula metric** offers you greater flexibility when performing analyses. A formula metric is a metric that consists of:
@@ -43,7 +43,7 @@ You can also view this metric in the [object management center](/docs/data/objec
 
 Experiment Results supports the formula functions listed here:
 
-**UNIQUES:**
+**UNIQUES**
 
 **Syntax**: UNIQUES(event)
 
@@ -51,7 +51,7 @@ 
 
 Returns the number of unique users who triggered the event.
 
-**TOTALS:**
+**TOTALS**
 
 **Syntax**: TOTALS(event)
 
@@ -59,7 +59,7 @@ Returns the number of unique users who triggered the event.
 
 Returns the total number of times users triggered the event.
 
-**PROPSUM:**
+**PROPSUM**
 
 **Syntax**: PROPSUM(event)
 
@@ -69,7 +69,7 @@ This function only works when grouping by a numerical property on the event. If
 
 Returns the sum of the property values you're grouping the specified event by.
 
-**PROPAVG:**
+**PROPAVG**
 
 **Syntax**: PROPAVG(event)
 
@@ -77,21 +77,31 @@ 
 
 This function only works when grouping by a numerical property on the event. If grouping by multiple properties, the formula runs the calculation with the first group-by clause.
-Returns the average of the property values you're grouping by. This function is equivalent to `PROPSUM(event)/TOTALS(event)`. [Learn more about how Amplitude calculates PROPAVG and PROPSUM in this article](/docs/feature-experiment/under-the-hood/experiment-analysis-chart-calculation)
+Returns the average of the property values you're grouping by. This function is the same as `PROPSUM(event)/TOTALS(event)`. [Learn more about how Amplitude calculates PROPAVG and PROPSUM in this article](/docs/feature-experiment/under-the-hood/experiment-analysis-chart-calculation).
 
-### PROPMAX
+**PROPCOUNT**
 
-**Syntax**: PROPMAX(event)
+**Syntax:** PROPCOUNT(event)
+
+* **Event:** Refers to the event that interests you. This must be a letter that corresponds to an event in the Events card. The event property must be a number. If grouping by multiple properties, the formula runs the calculation with the first group-by clause.
+
+  Returns the number of distinct property values for the property the event is grouped by.
+
+  Note that PROPCOUNT is an estimate of the number of distinct property values. This estimate comes from a [HyperLogLog algorithm](https://en.wikipedia.org/wiki/HyperLogLog), and its accuracy depends on the amount of data it has to work with. Expect a relative error in the range of 0.1% for fewer than 12,000 unique values, and up to 0.5% for more than 12,000 unique property values, depending on the cardinality of the property.
+
+**PROPMAX**
+
+**Syntax:** PROPMAX(event)
 
 * **Event:** Returns the maximum value of the property you're grouping the specified event by. The property must be numeric. If grouping by multiple properties, the calculation uses the first group-by clause.
 
-### PROPMIN
+**PROPMIN**
 
-**Syntax**: PROPMIN(event)
+**Syntax:** PROPMIN(event)
 
 * **Event:** Returns the minimum value of the property you're grouping the specified event by. The property must be numeric. If grouping by multiple properties, the calculation uses the first group-by clause.
 
-**CONVERSIONRATE (closed beta):**
+**CONVERSIONRATE (closed beta)**
 
 **Syntax:** CONVERSIONRATE(array of events, conversion window, latency offset)
 
@@ -107,7 +117,7 @@ Returns the conversion rate (< 1) from 1st event to nth event of the array. This
 
 ![](/docs/output/img/experiment-results/23576087044507)
 
-**CONVERSIONAVG (closed beta):**
+**CONVERSIONAVG (closed beta)**
 
 **Syntax:** CONVERSIONAVG(array of events, conversion window, latency offset)
 
@@ -135,7 +145,7 @@ In your formulas, refer to events selected in the Events Module by their corresp
 
 ## How Amplitude calculates experiment data for formula metrics
 
-Before getting into how calculations of formula metrics work with experiment data, it’s important to understand the overall [Experiment Analysis view](/docs/feature-experiment/analysis-view), which provides details for your experiment.
+Before getting into how calculations of formula metrics work with experiment data, it’s important to understand the [Experiment Analysis view](/docs/feature-experiment/analysis-view), which provides details for your experiment.
 
 For formula metrics, Amplitude computes the results for each function independently to find the mean and variance of each one. It then applies the arithmetic operators to the results of these individual functions.
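+
+To make that combination concrete, here's a minimal JavaScript sketch for a ratio metric such as `TOTALS(A) / TOTALS(B)`. It's illustrative only: the `ratioMetricStats` helper, its argument names, and the sample numbers are hypothetical, and the variance line uses the standard first-order (delta method) approximation rather than Amplitude's internal code. The worked example below states the mean rule this sketch encodes.
+
+```js
+// Hypothetical sketch: combine the independently computed mean and
+// variance of X = TOTALS(A) and Y = TOTALS(B) into the mean and
+// variance of the ratio metric X / Y (first-order delta method).
+function ratioMetricStats(meanX, varX, meanY, varY, covXY) {
+  // Mean rule: E[X / Y] = E[X] / E[Y]
+  const mean = meanX / meanY;
+  // Delta-method variance of a ratio:
+  // (E[X]/E[Y])^2 * (VarX/E[X]^2 + VarY/E[Y]^2 - 2*Cov(X,Y)/(E[X]*E[Y]))
+  const variance =
+    mean * mean *
+    (varX / (meanX * meanX) +
+      varY / (meanY * meanY) -
+      (2 * covXY) / (meanX * meanY));
+  return { mean, variance };
+}
+
+// Illustrative numbers only: 5,000 total A events and 2,000 total B events.
+const stats = ratioMetricStats(5000, 900, 2000, 400, 150);
+// stats.mean === 2.5; pass covXY = 0 to treat the two functions as independent.
+```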
@@ -164,6 +174,6 @@ If you set X equal to TOTALS(A) and Y equal to TOTALS(B), the following statemen
 
 Variance: ![](/docs/output/img/experiment-results/23576087077403){.inline} Mean: `E[X / Y] = E[X] / E[Y]`
 
-Once you have the mean and variance of the overall formula metric, you can calculate the confidence interval chart and the p-values.
+Once you have the mean and variance of the formula metric, you can calculate the confidence interval chart and the p-values.
 
 `Formula / Metric: TOTALS(A) / TOTALS(B)`
\ No newline at end of file
diff --git a/content/collections/partners/en/integration-portal.md b/content/collections/partners/en/integration-portal.md
index e659175e9..a396d6396 100644
--- a/content/collections/partners/en/integration-portal.md
+++ b/content/collections/partners/en/integration-portal.md
@@ -5,13 +5,17 @@ title: 'Integration Portal'
 landing: true
 landing_blurb: 'The Integration Portal is the starting point of your integration with Amplitude.'
 source: 'https://www.docs.developers.amplitude.com/partners/integration-portal/'
-updated_by: 0c3a318b-936a-4cbd-8fdf-771a90c297f0
-updated_at: 1718647261
+updated_by: 5817a4fa-a771-417a-aa94-a0b1e7f55eae
+updated_at: 1729184489
 ---
 The Amplitude Integration Portal is your gateway to enhanced collaboration and expanded integration possibilities. As a valued partner, you gain a world of opportunities to expand data connections for Amplitude customers while simplifying the integration process.
 
 With the Integration Portal, partners gain access to a range of tools and resources that simplify and speed up the integration process. These tools typically include comprehensive documentation, code samples, and best practices, which guide developers through the necessary steps for integration.
 
+{{partial:admonition type='note'}}
+Access to the Integration Portal is by invitation only, and requires a minimum of three identifiable customers. These can be either existing or prospective Amplitude customers.
+{{/partial:admonition}}
+
 ## Getting started
 
 Amplitude aims to make it easy for partners like you to self-define and add all the contextual information for your integration tile in the Amplitude app. Using the Integration Portal, you can:
 
diff --git a/content/collections/web_experiment/en/actions.md b/content/collections/web_experiment/en/actions.md
index fd75e2265..d22b87cb5 100644
--- a/content/collections/web_experiment/en/actions.md
+++ b/content/collections/web_experiment/en/actions.md
@@ -1,15 +1,18 @@
 ---
 id: 3ef0ccc6-5e0f-435b-9184-edb809f19210
 blueprint: web_experiment
-title: Web Experiment Actions
-updated_by: 0c3a318b-936a-4cbd-8fdf-771a90c297f0
-updated_at: 1728666798
+title: 'Web Experiment actions'
+updated_by: 5817a4fa-a771-417a-aa94-a0b1e7f55eae
+updated_at: 1729195880
 ---
-
 Actions define how variants modify your site. Actions relate to variants rather than a specific page, and apply to all pages that you target in your experiment.
 
 Experiment applies variant actions during evaluation. This happens on the initial page load and any time state pushes to or pops from the session history. History state changes also cause the SDK to revert all applied element change and custom code actions before reevaluating and reapplying actions with the updated page in mind.
 
+{{partial:admonition type='note'}}
+See [Amplitude's pricing page](https://amplitude.com/pricing) to find out if this feature is available on your Amplitude plan.
+{{/partial:admonition}}
+
 ## Element changes
 
 Element changes modify existing elements on your site. Web Experiment applies these changes by editing the inner text of an element or appending style to the element based on the change you make in the visual editor.
@@ -295,4 +298,4 @@ utils.waitForElement("body").then(function (body) {
 ```
 {{/partial:tab}}
 
-{{/partial:tabs}}
+{{/partial:tabs}}
\ No newline at end of file
diff --git a/content/collections/web_experiment/en/implementation.md b/content/collections/web_experiment/en/implementation.md
index 52b40269b..17a974ad4 100644
--- a/content/collections/web_experiment/en/implementation.md
+++ b/content/collections/web_experiment/en/implementation.md
@@ -1,14 +1,18 @@
 ---
 id: a5dc1793-29f7-4c23-a656-459def9c6b3f
 blueprint: web_experiment
-title: Implement Web Experiment
-updated_by: 0c3a318b-936a-4cbd-8fdf-771a90c297f0
-updated_at: 1728666781
+title: 'Implement Web Experiment'
+updated_by: 5817a4fa-a771-417a-aa94-a0b1e7f55eae
+updated_at: 1729196004
 ---
 Amplitude's Web Experimentation requires a standalone script that you must add to your website. Paste the script into the `<head>` element of your site, as high as possible to avoid flickering. The script tracks [impression events](/docs/experiment/web/tracking) with the [Browser SDK](/docs/sdks/analytics/browser/browser-sdk-2) already installed on your site, or a [third-party analytics SDK](#integrate-with-a-third-party-cdp).
 
+{{partial:admonition type='note'}}
+See [Amplitude's pricing page](https://amplitude.com/pricing) to find out if this feature is available on your Amplitude plan.
+{{/partial:admonition}}
+
 ## Add the experiment script
 
 Replace `API_KEY` with your project's API key in one of the synchronous scripts below, depending on your region:
@@ -156,4 +160,4 @@ Tag managers, like Google Tag Manager load scripts asynchronously, which causes
 
 Implementing Web Experiment with a tag manager will cause flicker. Only use a tag manager when getting started, if adding the script to the site is out of the question in the short-term.
 {{/partial:admonition}}
 
-Use a [custom HTML tag](https://support.google.com/tagmanager/answer/6107167) to add the script using GTM.
+Use a [custom HTML tag](https://support.google.com/tagmanager/answer/6107167) to add the script using GTM.
\ No newline at end of file
diff --git a/content/collections/web_experiment/en/performance.md b/content/collections/web_experiment/en/performance.md
index 7cfaf9baf..3a038354c 100644
--- a/content/collections/web_experiment/en/performance.md
+++ b/content/collections/web_experiment/en/performance.md
@@ -1,12 +1,16 @@
 ---
 id: 7ade889d-c09c-48ee-8910-c4592bcc09b0
 blueprint: web_experiment
-title: Web Experiment Performance
-updated_by: 0c3a318b-936a-4cbd-8fdf-771a90c297f0
-updated_at: 1728666816
+title: 'Web Experiment performance'
+updated_by: 5817a4fa-a771-417a-aa94-a0b1e7f55eae
+updated_at: 1729195974
 ---
 Web Experiment is built to minimize impact on page performance.
 
+{{partial:admonition type='note'}}
+See [Amplitude's pricing page](https://amplitude.com/pricing) to find out if this feature is available on your Amplitude plan.
+{{/partial:admonition}}
+
 ## Script size
 
 The Web Experiment script is dynamic, and includes all your experiment configurations to avoid making multiple synchronous downloads. This means that the script starts at a base size and scales with each experiment.
@@ -44,4 +48,4 @@ The cache control response header that configures browser caching is:
 
 ## Evaluation
 
-Web Experiment evaluation runs locally with information available synchronously in the browser. As a result, evaluation is CPU bound and usually takes less than 1ms to evaluate and apply variant actions.
+Web Experiment evaluation runs locally with information available synchronously in the browser. As a result, evaluation is CPU bound and usually takes less than 1ms to evaluate and apply variant actions.
\ No newline at end of file
diff --git a/content/collections/web_experiment/en/set-up-a-web-experiment.md b/content/collections/web_experiment/en/set-up-a-web-experiment.md
index 16f25fe0d..849c86a90 100644
--- a/content/collections/web_experiment/en/set-up-a-web-experiment.md
+++ b/content/collections/web_experiment/en/set-up-a-web-experiment.md
@@ -3,12 +3,16 @@ id: b8db5ecf-b7b0-432d-b1f3-19ae70d13291
 blueprint: web_experiment
 title: 'Set up a web experiment'
 updated_by: 5817a4fa-a771-417a-aa94-a0b1e7f55eae
-updated_at: 1729099005
+updated_at: 1729195945
 this_article_will_help_you:
   - 'Understand the difference between a Web Experiment and a feature experiment'
   - 'Build a Web Experiment using the Visual Editor'
 ---
-Amplitude **Web Experiment** lets you create an A/B or multi-armed bandit experiment **without new code**. Open your site in the [Visual Editor](#the-visual-editor), choose the elements you’d like to experiment with, and make changes to their content or properties directly. This allows for less-technical users to easily create experiments without engineering resources.
+Amplitude **Web Experiment** lets you create an A/B or [multi-armed bandit experiment](/docs/feature-experiment/workflow/multi-armed-bandit-experiments) **without new code**. Open your site in the [Visual Editor](#the-visual-editor), choose the elements you’d like to experiment with, and make changes to their content or properties directly. This allows less-technical users to create experiments without engineering resources.
+
+{{partial:admonition type='note'}}
+See [Amplitude's pricing page](https://amplitude.com/pricing) to find out if this feature is available on your Amplitude plan.
+{{/partial:admonition}}
 
 ## Before you begin
 
@@ -24,7 +28,7 @@ To set up a web experiment, follow these steps:
 
 2. In the *New Experiment* modal, give your experiment a name. Enter the URL for a page this experiment targets—Amplitude must be instrumented on that page—and select the appropriate project from the drop-down.
 3. If the script is present on the page you specified, Amplitude Experiment opens the page in the [Visual Editor](#the-visual-editor) as a new variant in your experiment.
 
-   You have two options for the treatment variant action: [element changes](/docs/web-experiment/actions#element-changes) or [URL redirect](/docs/web-experiment/actions#url-redirect). Lets assume we're changing elements.
+   You have two options for the treatment variant action: [element changes](/docs/web-experiment/actions#element-changes) or [URL redirect](/docs/web-experiment/actions#url-redirect).
 
 ![web-exp-1.png](/docs/output/img/workflow/web-exp-1.png)
 
diff --git a/content/collections/web_experiment/en/targeting.md b/content/collections/web_experiment/en/targeting.md
index 5723d030f..dde213fab 100644
--- a/content/collections/web_experiment/en/targeting.md
+++ b/content/collections/web_experiment/en/targeting.md
@@ -1,13 +1,16 @@
 ---
 id: 671d5f19-2b8a-463a-95be-f81de05e0860
 blueprint: web_experiment
-title: Web Experiment Targeting
-updated_by: 0c3a318b-936a-4cbd-8fdf-771a90c297f0
-updated_at: 1728666803
+title: 'Web Experiment targeting'
+updated_by: 5817a4fa-a771-417a-aa94-a0b1e7f55eae
+updated_at: 1729195990
 ---
-
 Web Experiments target both pages and audiences. Amplitude evaluates page targeting first, then audience targeting. Both targeting methods evaluate locally in the browser when the page first loads.
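+
+As a rough illustration of that order, the sketch below shows page rules gating audience evaluation. The rule shape (`matchType`, `value`) and both function names are hypothetical; this isn't the Web Experiment SDK's actual logic, only a picture of a synchronous, in-browser check.
+
+```js
+// Hypothetical sketch of local targeting evaluation: page rules run
+// first, and audience rules only run if some page rule matches.
+function pageMatches(rule, url = window.location.href) {
+  if (rule.matchType === 'exact') return url === rule.value;
+  if (rule.matchType === 'regex') return new RegExp(rule.value).test(url);
+  return url.startsWith(rule.value); // assume prefix match as the default
+}
+
+function shouldEvaluateAudiences(pageRules) {
+  return pageRules.some((rule) => pageMatches(rule));
+}
+```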
+{{partial:admonition type='note'}}
+See [Amplitude's pricing page](https://amplitude.com/pricing) to find out if this feature is available on your Amplitude plan.
+{{/partial:admonition}}
+
 ## Page targeting
 
 By default, a new Web Experiment targets the URL set on creation. This is the same URL that both the visual editor and Test & Preview tool use. To target multiple pages on your site, configure additional targeting rules.
diff --git a/content/collections/web_experiment/en/tracking.md b/content/collections/web_experiment/en/tracking.md
index 2b98e4ab1..16ceb6f82 100644
--- a/content/collections/web_experiment/en/tracking.md
+++ b/content/collections/web_experiment/en/tracking.md
@@ -1,12 +1,16 @@
 ---
 id: 23ff249c-45ab-488a-b8aa-ae8fde85249d
 blueprint: web_experiment
-title: Web Experiment Event Tracking
-updated_by: 0c3a318b-936a-4cbd-8fdf-771a90c297f0
-updated_at: 1728666820
+title: 'Web Experiment event tracking'
+updated_by: 5817a4fa-a771-417a-aa94-a0b1e7f55eae
+updated_at: 1729195928
 ---
 Web Experiment uses impression events for analysis and billing purposes. The Web Experiment script tracks impression events through the [integration](/docs/web-experiment/implementation#integrate-with-a-third-party-cdp). Tracking impression events is required for experiment analysis.
 
+{{partial:admonition type='note'}}
+See [Amplitude's pricing page](https://amplitude.com/pricing) to find out if this feature is available on your Amplitude plan.
+{{/partial:admonition}}
+
 ## Impressions
 
 The impression event is the same as the Feature Experiment [exposure event](/docs/feature-experiment/under-the-hood/event-tracking#exposure-events), but has a different event type, `[Experiment] Impression`. An impression event's event properties contain the **flag key** and the **variant** of the flag or experiment that the user was exposed to.
diff --git a/content/collections/workflow/en/multi-armed-bandit-experiments.md b/content/collections/workflow/en/multi-armed-bandit-experiments.md
index 38e895cde..944418ac9 100644
--- a/content/collections/workflow/en/multi-armed-bandit-experiments.md
+++ b/content/collections/workflow/en/multi-armed-bandit-experiments.md
@@ -3,14 +3,14 @@ id: d368cb08-20c7-424f-ba2f-3d902fc10cb6
 blueprint: workflow
 title: 'Multi-armed bandit experiments'
 landing: false
-updated_by: 0c3a318b-936a-4cbd-8fdf-771a90c297f0
-updated_at: 1718772564
+updated_by: 5817a4fa-a771-417a-aa94-a0b1e7f55eae
+updated_at: 1729185255
 ---
 In a traditional A/B test, Amplitude Experiment assesses all the variants in your experiment until it reaches a statistically significant result. From there, you can choose to roll out the winning variant, or roll all users back to the control variant instead. Your decisions depend on why a particular variant outperformed the others.
 
 But sometimes, that reason isn’t relevant. All you want is to decide which variant is performing best and send as much traffic as possible to it. For example:
 
-- Optimizing hero images, messaging, color changes to UI elements, etc.
+- [Optimizing hero images, messaging, color changes to UI elements](/docs/web-experiment/set-up-a-web-experiment), etc.
- In-product layout changes, like information hierarchy or order of operations - Optimizing menus or navigation - Ad optimization for seasonal or time-sensitive promotions or events From a1d490cb1b67aba6e8e50abb111d6bbaff07ca2b Mon Sep 17 00:00:00 2001 From: SpencerFleury <159941756+SpencerFleury@users.noreply.github.com> Date: Thu, 17 Oct 2024 13:22:51 -0700 Subject: [PATCH 2/2] Update content/collections/experiment-results/en/experiment-results-use-formula-metrics.md Co-authored-by: markzegarelli