From bd545b898c2cc30aa742ecb59c8a568055919cf4 Mon Sep 17 00:00:00 2001
From: markzegarelli
Date: Mon, 16 Sep 2024 10:02:08 -0700
Subject: [PATCH] Apply suggestions from code review

Co-authored-by: SpencerFleury <159941756+SpencerFleury@users.noreply.github.com>
---
 content/collections/analytics/en/product-analytics.md  |  2 +-
 .../en/event-segmentation-in-line-events.md            |  4 ++--
 content/collections/experiment/en/cohort-targeting.md  | 10 +++++-----
 .../journeys/en/journeys-understand-visualizations.md  |  2 +-
 4 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/content/collections/analytics/en/product-analytics.md b/content/collections/analytics/en/product-analytics.md
index 98f025bd1..46aa396ef 100644
--- a/content/collections/analytics/en/product-analytics.md
+++ b/content/collections/analytics/en/product-analytics.md
@@ -19,7 +19,7 @@ This feature is available on all Amplitude plans. For more information, see the
 
 On the Basic Settings page, select the event that represents an active action in your product. By default, Amplitude sets `[Amplitude] Any Active Event` as the event.
 
-Next, select the retention intervals that are most meaningful to you. Set both Daily and Weekly intervals. Use Amplitude's [usage interval analysis](/docs/analytics/charts/retention-analysis/retention-analysis-usage-interval) to determine how long users go between triggering your critical event.
+Next, select the retention intervals that are most meaningful to you. Set both Daily and Weekly intervals. Use Amplitude's [usage interval analysis](/docs/analytics/charts/retention-analysis/retention-analysis-usage-interval) to learn how long users go between triggering your critical event.
 
 Configure breakdown properties for the Product Overview, Onboarding, and Retention views. Select up to three.
 
diff --git a/content/collections/event-segmentation/en/event-segmentation-in-line-events.md b/content/collections/event-segmentation/en/event-segmentation-in-line-events.md
index 45ff0c218..390db95e9 100644
--- a/content/collections/event-segmentation/en/event-segmentation-in-line-events.md
+++ b/content/collections/event-segmentation/en/event-segmentation-in-line-events.md
@@ -22,7 +22,7 @@ Follow these steps to add a custom event:
 2. Next, click *Add event inline* to add a custom event. Add any number of custom events.
 
    {{partial:admonition type='note'}}
-   The in-line event that you create is relevant to that specific chart and isn't be accessible anywhere else unless you save it as a custom event. 
+   The in-line event that you create is relevant to that specific chart and isn't accessible anywhere else unless you save it as a custom event. 
   {{/partial:admonition}}
 
 3. If desired, hover on the event and click *Filter* to add event properties. Add as many filter properties as needed for each in-line event.
@@ -34,5 +34,5 @@ Follow these steps to add a custom event:
 5. Click *Remove* to remove properties and in-line events, as needed.
 
 {{partial:admonition type="note" heading=""}}
-Custom events can't contain other custom events. Also, *Show User Journeys*, *Explore Conversion Drivers* and *Show User Paths* aren't available on the Microscope for in-line event steps in funnels. 
+Custom events can't contain other custom events. Also, *Show User Journeys*, *Explore Conversion Drivers*, and *Show User Paths* aren't available from the Microscope for in-line event steps in funnels. 
 {{/partial:admonition}}
\ No newline at end of file
diff --git a/content/collections/experiment/en/cohort-targeting.md b/content/collections/experiment/en/cohort-targeting.md
index 8708cded0..f4d29caa2 100644
--- a/content/collections/experiment/en/cohort-targeting.md
+++ b/content/collections/experiment/en/cohort-targeting.md
@@ -16,11 +16,11 @@ Experiment cohort targeting currently only supports targeting **user** cohorts.
 
 When you target a cohort in a remote evaluation flag, the cohort is automatically synced to the *Amplitude Experiment* destination. For dynamic cohorts, this sync runs hourly by default. This means that dynamic cohorts targeted in remote evaluation aren't real-time. For example, if you target a cohort of users who performed a `Sign Up` event, users are targeted within an hour of performing the event--not immediately after.
 
-Cohorts targeted for remote evaluation may have a propagation delay on the initial sync or large change depending on the size of the difference. For example, syncing a cohort of 10 million users for the first time will take some time to initially sync fully.
+Cohorts targeted for remote evaluation may have a propagation delay on the initial sync or after a large change, depending on the size of the difference. For example, the first sync of a 10-million-user cohort is likely to take much more time than later syncs.
 
 **Use remote evaluation cohort targeting if...**
 
-- You are targeting users based on user behavior or properties which aren't available in Experiment targeting segments.
+- You are targeting users based on user behavior or properties that aren't available in Experiment targeting segments.
 - You are ok with some targeting delay introduced by cohort sync intervals.
 
 **Don't use remote evaluation cohort targeting if...**
@@ -32,14 +32,14 @@ Cohorts targeted for remote evaluation may have a propagation delay on the initi
 Local evaluation flags and experiment that are deployed to up-to-date server-side SDKs can also target cohorts. When you target a cohort in a local evaluation flag, the cohort is automatically synced to the *Experiment Local Evaluation* destination. For dynamic cohorts, this sync runs hourly by default. This means that dynamic cohorts targeted in local evaluation aren't real-time. For example, if you target a cohort of users who performed a `Sign Up` event, users will be targeted within an hour of performing the event--not immediately after.
 
 {{partial:admonition type="note" heading="Cohorts only support User IDs"}}
-Local evaluation cohorts currently only sync **User IDs** to the SDKs. This means that to target cohorts in local evaluation flags, you **must** include a User ID in the user object passed to the evaluate function.
+Local evaluation cohorts currently only sync **user IDs** to the SDKs. This means that to target cohorts in local evaluation flags, you **must** include a user ID in the user object passed to the evaluate function.
 {{/partial:admonition}}
 
 ### SDK Support
 
 Server-side SDKs can target cohorts if configured to do so. Client-side SDKs don't currently support local evaluation cohort targeting.
 
-On initialization, configure the cohort sync configuration with the project API and Secret key to enable local evaluation
+On initialization, configure the cohort sync configuration with the project API and secret key to enable local evaluation
 cohort downloading and targeting.
 
 | SDK | Cohort Targeting | Version |
@@ -119,7 +119,7 @@ experiment = AmplitudeExperiment.initialize_local('DEPLOYMENT_KEY',
 
 ## Troubleshooting
 
-Troubleshooting cohort targeting can challenging due to the asynchronous nature of dynamic cohorts and cohort syncs in general. If you find that users that should be in the targeted cohort aren't being targeted...
+Troubleshooting cohort targeting can be challenging due to the asynchronous nature of dynamic cohorts and cohort syncs in general. If you find that your experiment isn't targeting users who should be in the targeted cohort...
 
 - For local evaluation, check that the SDK version supports local evaluation cohort targeting, and that **the cohort sync config has been set on initialization**.
 - Check that **the cohort has the required sync** (*Amplitude Experiment* for remote evaluation, *Experiment Local Evaluation* for local evaluation).
diff --git a/content/collections/journeys/en/journeys-understand-visualizations.md b/content/collections/journeys/en/journeys-understand-visualizations.md
index e131d16b6..bb9f11c03 100644
--- a/content/collections/journeys/en/journeys-understand-visualizations.md
+++ b/content/collections/journeys/en/journeys-understand-visualizations.md
@@ -76,7 +76,7 @@ Note that each percentage on this chart refers to the percentage of all users in
 
 Next, 20.7% of all users triggered `taxonomy: view event detail panel`, followed by 17.5% triggering `open event dropdown`. This 17.5% represents 2,725 users.
 
-Notice the same progressions in the Journey Map. Here we can also see that those 2,725 users took an average of 1 hour and 48 minutes to progress all the way through the path.
+Notice the same progressions in the Journey Map. Those 2,725 users took an average of 1 hour and 48 minutes to progress all the way through the path.
 
 ## Differences from the legacy versions of these charts
 