
[editor] add telemetry for more events #946

Merged 1 commit into main on Jan 18, 2024

Conversation

@Ankush-lastmile (Member) commented on Jan 16, 2024:

[editor] add telemetry for more events

As discussed offline, we want to enable telemetry in the local editor for the following events:

  • RUN_PROMPT_START
  • RUN_PROMPT_CANCEL
  • RUN_PROMPT_ERROR
  • RUN_PROMPT_SUCCESS

This diff implements telemetry logging for those events.

  • Add mode "local" to all telemetry sent from the local editor in prod mode.
  • Send telemetry log events after dispatching rather than before, so that the client render action gets called first (see the sketch after this list).
    • This technically shouldn't matter since these are all async calls.
  • Note: this only implements logging for the Local Editor. Next up is adding the Datadog initialization to Gradio Workbook.
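A minimal sketch of the dispatch-then-log ordering described above, assuming a reducer-style dispatch and an optional logEventHandler callback like the one in the snippet quoted in the review below; the onPromptStart name, promptId parameter, and action payload shape are illustrative, not the exact editor code.

// Sketch only: dispatch the state update first, then log telemetry.
type RunPromptEvent =
  | "RUN_PROMPT_START"
  | "RUN_PROMPT_CANCEL"
  | "RUN_PROMPT_ERROR"
  | "RUN_PROMPT_SUCCESS";

function onPromptStart(
  promptId: string,
  dispatch: (action: { type: RunPromptEvent; promptId: string }) => void,
  logEventHandler?: (event: RunPromptEvent) => void
) {
  // Dispatch first so the client render action runs before anything else.
  dispatch({ type: "RUN_PROMPT_START", promptId });
  // Then send the telemetry event; both calls are effectively async, so the
  // ordering mainly keeps the render path unblocked.
  logEventHandler?.("RUN_PROMPT_START");
}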

Testplan

  1. yarn build -> run the editor in 'prod' mode
  2. Run a prompt, cancel it, then rerun to success. Check that telemetry is sent to Datadog.

extra.telemetry.testplan.mov: https://github.com/lastmile-ai/aiconfig/assets/141073967/52043456-3682-4dd7-b95b-846ad7480019

@Ankush-lastmile changed the title from "{wip} [editor] add telemetry for more events" to "[editor] add telemetry for more events" on Jan 17, 2024

@rholinshead (Contributor) left a comment:

Accepting to unblock, but it would be good to add additional data to the events, mainly so that we can understand which models/parsers are being run most (and which ones are errored/cancelled).

Long-term we can also consider logging a unique id for each distinct run -> success/error/cancel flow, so that we can add monitoring for when models/parsers are errored/cancelled at a higher-than-normal rate.
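For illustration only, a sketch of what such an enriched payload could look like, assuming logEventHandler accepts an optional data object; the field names, the example model string, and the crypto.randomUUID() call are assumptions, not part of this PR.

// Hypothetical enriched telemetry payload: attach the model/parser being run
// plus a per-run id so START/SUCCESS/ERROR/CANCEL events from the same run
// can be correlated for rate monitoring.
type RunEventData = {
  model?: string;   // model or parser being run
  runId?: string;   // unique id for one run -> success/error/cancel flow
  message?: string; // error message, if any
};

// Assumed signature; the PR's logEventHandler may not take a data argument.
declare function logEventHandler(event: string, data?: RunEventData): void;

const runId = crypto.randomUUID(); // generated once per run flow (assumption)

logEventHandler("RUN_PROMPT_START", { model: "gpt-4", runId });
// ...later, if the run fails:
logEventHandler("RUN_PROMPT_ERROR", { model: "gpt-4", runId, message: "request timed out" });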


const onPromptError = (message: string | null) => {
  dispatch({
    type: "RUN_PROMPT_ERROR",
    promptId,
    message: message ?? undefined,
  });
  // Telemetry is logged after dispatching so the client render action runs first.
  logEventHandler?.("RUN_PROMPT_ERROR");
};

Contributor commented:

Should we log the error message in the data as well? And maybe the prompt model?

@Ankush-lastmile (Member, Author) replied on Jan 17, 2024:

I was thinking the same about the message, but I believe the error logs currently pass the entire aiconfig. I wasn't sure if we want to be logging that, since we said we don't want to log the config. Thoughts?

Wdyt, should we pass in the message?

Contributor replied:

If the message includes the entire aiconfig, we shouldn't log that (and should probably fix it to not do that). I was pretty sure it just returns an error message without the config, though.

@Ankush-lastmile (Member, Author) replied:

Going to ship this and then investigate (and make changes as necessary on top).

@Ankush-lastmile (Member, Author) commented on Jan 18, 2024:

  • Added the model to the Run event logging. It wasn't straightforward to find the model, so this adds some logic along the lines of state aiconfig -> getPrompt -> getModelName (see the sketch after this list).
  • Rebased.
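A rough sketch of that model lookup, assuming an aiconfig-like state shape; the Prompt and AIConfigState types and the getPrompt/getModelName helpers below are illustrative stand-ins for the editor's actual utilities.

// Hypothetical approximation of the editor state; the real aiconfig types
// and client helpers may differ.
interface Prompt {
  name: string;
  metadata?: { model?: string | { name: string } };
}

interface AIConfigState {
  prompts: Prompt[];
  metadata?: { default_model?: string };
}

function getPrompt(state: AIConfigState, promptId: string): Prompt | undefined {
  // Matching on prompt name here is a simplification of however the editor keys prompts.
  return state.prompts.find((p) => p.name === promptId);
}

function getModelName(state: AIConfigState, prompt?: Prompt): string | undefined {
  const model = prompt?.metadata?.model;
  if (typeof model === "string") return model;
  if (model?.name) return model.name;
  // Fall back to a config-level default model if the prompt doesn't set one.
  return state.metadata?.default_model;
}

// Usage when logging a run event (payload shape is an assumption):
// logEventHandler?.("RUN_PROMPT_START", { model: getModelName(state, getPrompt(state, promptId)) });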

@Ankush-lastmile merged commit 3e52fd5 into main on Jan 18, 2024. 2 checks passed.