
[MM-57211] Fix issues with multiple transcription jobs per call #657

Merged
streamer45 merged 1 commit into main from MM-57211 on Mar 12, 2024

Conversation

streamer45 (Collaborator) commented on Mar 12, 2024

Summary

In the case of multiple transcription jobs per call, we were not writing the metadata correctly, which would cause the captions to fail to attach and render.

Ticket Link

https://mattermost.atlassian.net/browse/MM-57211

@streamer45 added the "2: Dev Review" label (Requires review by a core committer) on Mar 12, 2024
@streamer45 added this to the v0.26.0 / MM 9.8 milestone on Mar 12, 2024
@streamer45 requested a review from cpoile on March 12, 2024 at 20:46
@streamer45 self-assigned this on Mar 12, 2024
@@ -109,7 +109,7 @@ func (p *Plugin) saveRecordingMetadata(postID, recID, trID string) error {
trID: tm.toMap(),
}
} else {
recordings[trID] = tm.toMap()
Collaborator Author


This is the bug, an innocent typo.
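
For readers skimming without the full file: a minimal, self-contained sketch of the kind of write this hunk fixes. The variable names mirror the diff, but the buggy key and the map layout are assumptions, not copied from the plugin.

package main

import "fmt"

func main() {
	recID := "rec-1"                       // recording job ID
	trID := "tr-1"                         // transcription job ID
	tm := map[string]any{"file_id": "f-1"} // stand-in for tm.toMap()

	// recordings holds per-job metadata for the call post.
	recordings := map[string]any{}

	// Assumed buggy version: keying by recID instead of trID means a later
	// lookup for the transcription job finds nothing, so its captions fail
	// to attach and render.
	//   recordings[recID] = tm
	_ = recID

	// Fixed version, matching the line in the hunk above.
	recordings[trID] = tm

	fmt.Println(recordings) // map[tr-1:map[file_id:f-1]]
}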

@@ -17,21 +17,21 @@ import (

var callRecordingActionRE = regexp.MustCompile(`^\/calls\/([a-z0-9]+)/recording/(start|stop|publish)$`)

const recordingJobStartTimeout = 2 * time.Minute
Collaborator Author


Lowering this a bit since 2 minutes is just too long for a user to be waiting.
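
For context, a hypothetical sketch of what a job start timeout like recordingJobStartTimeout typically guards; the function and channel here are illustrative, not the plugin's actual API.

package main

import (
	"fmt"
	"time"
)

// waitForJobStart blocks until the bot signals that the job started, or gives
// up once the configured start timeout (recordingJobStartTimeout above) elapses.
func waitForJobStart(started <-chan struct{}, timeout time.Duration) error {
	select {
	case <-started:
		return nil
	case <-time.After(timeout):
		return fmt.Errorf("timed out waiting for bot to join call")
	}
}

func main() {
	started := make(chan struct{}) // never signaled in this demo, so the timeout path runs
	if err := waitForJobStart(started, 50*time.Millisecond); err != nil {
		fmt.Println(err) // timed out waiting for bot to join call
	}
}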

Comment on lines +59 to +65
// This is needed as we don't yet handle wsEventCallTranscriptionState on
// the client since jobs are coupled.
recClientState.Err = "failed to start transcriber job: timed out waiting for bot to join call"
p.publishWebSocketEvent(wsEventCallRecordingState, map[string]interface{}{
"callID": callID,
"recState": clientState.toMap(),
}, &model.WebsocketBroadcast{ChannelId: callID, ReliableClusterSend: true})
Collaborator Author


This is something I forgot to add originally. If the transcribing job fails first, we wouldn't notify the host, since we are not handling the wsEventCallTranscriptionState event below on the client.

@@ -26,7 +26,7 @@ func (p *Plugin) transcriptionJobTimeoutChecker(callID, jobID string) {

trState, err := state.getTranscription()
if err != nil {
p.LogError("failed to get transcription state", "error", err.Error())
p.LogWarn("failed to get transcription state", "err", err.Error(), "callID", callID, "jobID", jobID)
Member


Not asking for a change, but just curious why the change to LogWarn? If we return, it seems like an error makes sense?

Collaborator Author


Good question. The reason is that in most cases it's not really a sign of a problem. For example, it will happen in any of these cases:

  • If the host ends a call before the bot gets a chance to join.
  • If the host stops and restarts a recording before the first one has started.
  • If either of the two coupled jobs (transcription and recording) fails, since we also stop the other but don't explicitly cancel the timeout checker (a rough sketch of this case follows below).
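
A hypothetical sketch (not the plugin's code) of that last case: the timeout checker fires after the coupled jobs were already torn down, so finding no transcription state is expected and is only worth a warning.

package main

import (
	"errors"
	"log"
)

// transcriptionJobTimeoutChecker is an illustrative stand-in: it runs on a
// timer and may fire after the call ended or the coupled recording job
// already stopped everything, in which case the state is simply gone.
func transcriptionJobTimeoutChecker(getState func() (any, error), callID, jobID string) {
	trState, err := getState()
	if err != nil {
		// Usually an expected race rather than a real failure: warn and bail.
		log.Printf("WARN: failed to get transcription state, err=%v, callID=%s, jobID=%s", err, callID, jobID)
		return
	}
	_ = trState // otherwise, go on to check whether the job actually started in time
}

func main() {
	getState := func() (any, error) { return nil, errors.New("no transcription state found") }
	transcriptionJobTimeoutChecker(getState, "call-1", "job-1")
}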

Member


Ahh, okay, that makes sense. Thanks.

@streamer45 added the "3: Reviews Complete" label (All reviewers have approved the pull request) and removed the "2: Dev Review" label (Requires review by a core committer) on Mar 12, 2024
@streamer45 merged commit 9e2d2c8 into main on Mar 12, 2024
6 checks passed
@streamer45 deleted the MM-57211 branch on March 12, 2024 at 21:14