[MM-57211] Fix issues with multiple transcription jobs per call #657
Conversation
@@ -109,7 +109,7 @@ func (p *Plugin) saveRecordingMetadata(postID, recID, trID string) error {
		trID: tm.toMap(),
	}
} else {
	recordings[trID] = tm.toMap()
This is the bug, an innocent typo.
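For context, here is a minimal sketch of the kind of typo being pointed at, assuming the map is meant to be keyed by the recording ID rather than the transcription ID; the types and helper names are illustrative, not the plugin's actual ones:

```go
package main

import "fmt"

// trMetadata is an illustrative stand-in for the transcription metadata
// stored on a recording post; the real plugin types differ.
type trMetadata struct {
	TrID string
}

func (tm trMetadata) toMap() map[string]any {
	return map[string]any{"tr_id": tm.TrID}
}

func main() {
	recID, trID := "rec1", "tr1"
	tm := trMetadata{TrID: trID}

	recordings := map[string]any{}

	// Buggy keying (the "innocent typo"): using the transcription ID means a
	// later lookup by recording ID finds nothing, so captions never attach.
	recordings[trID] = tm.toMap()

	// Assumed intent: key the metadata by the recording ID instead.
	recordings[recID] = tm.toMap()

	fmt.Println(recordings)
}
```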
@@ -17,21 +17,21 @@ import (

var callRecordingActionRE = regexp.MustCompile(`^\/calls\/([a-z0-9]+)/recording/(start|stop|publish)$`)

const recordingJobStartTimeout = 2 * time.Minute
Lowering this a bit since 2 minutes is just too much for a user to be waiting.
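As a rough illustration of what this constant guards, here is a hedged sketch of a job-start timeout wait; `waitForJobStart` is a hypothetical helper, not the plugin's code, and only shows the shape of the check:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForJobStart is a hypothetical helper: poll a started() check until
// the job reports as running or the timeout elapses.
func waitForJobStart(started func() bool, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if started() {
			return nil
		}
		time.Sleep(100 * time.Millisecond)
	}
	return errors.New("timed out waiting for bot to join call")
}

func main() {
	// Simulate a job that never starts so the timeout path is exercised.
	// A shorter timeout means the host learns about the failure sooner.
	err := waitForJobStart(func() bool { return false }, time.Second)
	fmt.Println(err)
}
```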
// This is needed as we don't yet handle wsEventCallTranscriptionState on
// the client since jobs are coupled.
recClientState.Err = "failed to start transcriber job: timed out waiting for bot to join call"
p.publishWebSocketEvent(wsEventCallRecordingState, map[string]interface{}{
	"callID":   callID,
	"recState": clientState.toMap(),
}, &model.WebsocketBroadcast{ChannelId: callID, ReliableClusterSend: true})
This is something I forgot to add originally. If the transcribing job fails first, we wouldn't notify the host, since we are not handling the wsEventCallTranscriptionState event below.
@@ -26,7 +26,7 @@ func (p *Plugin) transcriptionJobTimeoutChecker(callID, jobID string) {

	trState, err := state.getTranscription()
	if err != nil {
-		p.LogError("failed to get transcription state", "error", err.Error())
+		p.LogWarn("failed to get transcription state", "err", err.Error(), "callID", callID, "jobID", jobID)
Not asking for a change, but just curious why the change to LogWarn? If we return, it seems like an error makes sense?
Good question. The reason is that in most cases this isn't really a sign of a problem. For example, it will happen in any of these cases (see the sketch after this list):
- If the host ends a call before the bot gets a chance to join.
- If the host stops and restarts a recording before the first one started.
- If either of the two coupled jobs (transcription and recording) fails, since we also stop the other but don't explicitly cancel the timeout checker.
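To make that reasoning concrete, here is a rough sketch of the checker's shape, using invented stand-in types rather than the plugin's real ones: by the time the checker wakes up, the call or the job may simply be gone, so it warns and returns instead of reporting an error.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// Illustrative stand-ins, not the plugin's real types.
type transcriptionState struct {
	StartAt int64
}

type callState struct {
	transcription *transcriptionState
}

func (cs *callState) getTranscription() (*transcriptionState, error) {
	if cs.transcription == nil {
		return nil, errors.New("no transcription state found")
	}
	return cs.transcription, nil
}

// transcriptionJobTimeoutChecker sketches the checker: if the state is
// already gone when it wakes up (call ended, recording restarted, or the
// coupled job failed and cleaned up), that is expected, so it warns and
// returns rather than logging an error.
func transcriptionJobTimeoutChecker(state *callState, timeout time.Duration) {
	time.Sleep(timeout)

	trState, err := state.getTranscription()
	if err != nil {
		fmt.Println("warn: failed to get transcription state:", err)
		return
	}

	if trState.StartAt == 0 {
		fmt.Println("error: transcription job failed to start in time")
	}
}

func main() {
	// A state with no transcription simulates the benign cases listed above.
	transcriptionJobTimeoutChecker(&callState{}, 100*time.Millisecond)
}
```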
Ahh, okay, that makes sense. Thanks.
Summary
In the case of multiple transcription jobs per call, we were not writing the metadata correctly, which would cause a failure to attach and render the captions.
Ticket Link
https://mattermost.atlassian.net/browse/MM-57211