Description of the feature

In general, forced success is divided into two sub-functions:

1. Add a new task status called forced_success. For tasks that fail in a workflow, you can manually change their status to forced_success. If the workflow is still running, downstream dependencies can then continue to execute.
2. Add a new execution command called resume_from_forced_success. That is, when a workflow has stopped, if it contains a node that was forced to succeed, execution of the workflow can be resumed.
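For orientation, here is a minimal sketch of how the two additions could be represented as enum values. The class, enum, and constant names below are illustrative assumptions, not the project's actual definitions.

```java
// Illustrative sketch only: names are assumptions, not the project's real enums.
public class ForcedSuccessTypes {

    // A task state enum extended with the new FORCED_SUCCESS status.
    public enum TaskState {
        SUBMITTED, RUNNING, SUCCESS, FAILURE, STOP,
        FORCED_SUCCESS;          // new: a failed task manually marked as successful

        // Only failed tasks are eligible to be forced to success.
        public boolean canBeForcedSuccess() {
            return this == FAILURE;
        }
    }

    // A command type enum extended with the new resume command.
    public enum WorkflowCommand {
        START_PROCESS, RECOVER_FROM_FAILURE, STOP,
        RESUME_FROM_FORCED_SUCCESS;   // new: resume a stopped workflow that has forced-success nodes
    }
}
```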
Implementation of the feature

A new interface has been added, and two existing interfaces have been modified. For the interface documentation, see https://dsfts.w.eolinker.com/#/share/index?shareCode=GmFXMM

1. Added an interface for modifying the task status. For task instances whose state is typeIsFailure, the status can be changed to FORCED_SUCCESS.
Specifically, the status of the failed task instance is modified directly at the API-SERVER layer, and the operation is logged in the API-SERVER. With this as the prerequisite, there are two situations in the background (two illustrative sketches follow this list):
- If the workflow instance that the task instance belongs to is in a stopped state: execution can continue downward from the node that was forced to succeed; see section "2" below.
- If the corresponding workflow instance is running: masterExecThread keeps checking the database for failed tasks in the completeTaskList that have been forced to succeed; once one is detected, it submits the nodes after that task instance. For a task with failure retries, if the task is currently within its retry interval and the user forces the previous task instance to succeed, no further retry is attempted and the downstream nodes are submitted instead.
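First, a minimal sketch of the API-SERVER side of the status change, assuming a hypothetical service and persistence abstraction; it only illustrates validating that the task instance is failed and persisting the new state.

```java
// Hypothetical sketch of the API-SERVER side update; class and method names are assumptions.
public class TaskForceSuccessService {

    public enum TaskState { RUNNING, SUCCESS, FAILURE, FORCED_SUCCESS }

    /** Minimal stand-in for a persisted task instance row. */
    public static class TaskInstance {
        int id;
        TaskState state;
    }

    /** Assumed persistence abstraction; the real project would use its own DAO/mapper. */
    public interface TaskInstanceStore {
        TaskInstance findById(int id);
        void update(TaskInstance task);
    }

    private final TaskInstanceStore store;

    public TaskForceSuccessService(TaskInstanceStore store) {
        this.store = store;
    }

    /** Force a failed task instance to success; only failed tasks are eligible. */
    public boolean forceTaskSuccess(int taskInstanceId) {
        TaskInstance task = store.findById(taskInstanceId);
        if (task == null || task.state != TaskState.FAILURE) {
            return false;                       // reject non-failed tasks
        }
        task.state = TaskState.FORCED_SUCCESS;  // status is changed directly at the API layer
        store.update(task);                     // persisted so the master side can observe it
        return true;
    }
}
```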
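Second, a sketch of the master-side detection for the running-workflow case. The loop is a simplification; the helper methods for re-reading task state and submitting downstream nodes are hypothetical.

```java
import java.util.Map;

// Illustrative sketch of the master thread re-checking completed (failed) tasks; all names are assumptions.
public abstract class ForcedSuccessWatcher {

    public enum TaskState { RUNNING, SUCCESS, FAILURE, FORCED_SUCCESS }

    /** Failed tasks already collected by the execution thread, keyed by task name. */
    protected abstract Map<String, TaskState> completeTaskStates();

    /** Re-read the latest state of a task instance from the database. */
    protected abstract TaskState reloadStateFromDb(String taskName);

    /** Submit the nodes that depend on the given task. */
    protected abstract void submitDownstream(String taskName);

    /** Called periodically by the main execution loop. */
    public void checkForcedSuccess() {
        for (Map.Entry<String, TaskState> entry : completeTaskStates().entrySet()) {
            if (entry.getValue() != TaskState.FAILURE) {
                continue;                                     // only failed tasks can have changed
            }
            TaskState latest = reloadStateFromDb(entry.getKey());
            if (latest == TaskState.FORCED_SUCCESS) {
                entry.setValue(TaskState.FORCED_SUCCESS);     // stop any pending retry for this task
                submitDownstream(entry.getKey());             // continue with the dependent nodes
            }
        }
    }
}
```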
2. Modified the execute interface of the original process and added a new commandType. This operation can be triggered when a workflow has failed and contains valid task instances that were forced to succeed.
Specifically, after the operation is triggered, all valid task instances from the previous run are loaded, the entire processInstanceJson is built into a DAG, and the nodes downstream of the forced-success nodes can then continue to execute. Nodes such as sub-process and condition are also supported.
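The resume step could look roughly like the following. The DAG representation and helper methods are hypothetical; the sketch only illustrates keeping already-successful (or forced-success) results and submitting the nodes whose upstream work is complete.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of resume_from_forced_success handling; names and structure are assumptions.
public abstract class ForcedSuccessResumer {

    public enum TaskState { SUCCESS, FAILURE, FORCED_SUCCESS }

    /** Valid task instances from the previous run, keyed by node name. */
    protected abstract Map<String, TaskState> loadValidTaskInstances(int processInstanceId);

    /** Build the DAG from the stored processInstanceJson: node -> its upstream dependencies. */
    protected abstract Map<String, List<String>> buildDag(int processInstanceId);

    /** Submit a node for execution. */
    protected abstract void submit(String node);

    public void resume(int processInstanceId) {
        Map<String, TaskState> previous = loadValidTaskInstances(processInstanceId);
        Map<String, List<String>> dag = buildDag(processInstanceId);

        // Treat SUCCESS and FORCED_SUCCESS alike: their results are kept, not re-run.
        Set<String> done = new HashSet<>();
        for (Map.Entry<String, TaskState> e : previous.entrySet()) {
            if (e.getValue() == TaskState.SUCCESS || e.getValue() == TaskState.FORCED_SUCCESS) {
                done.add(e.getKey());
            }
        }

        // Submit every node that has not finished successfully but whose upstream nodes have all finished.
        for (Map.Entry<String, List<String>> e : dag.entrySet()) {
            String node = e.getKey();
            if (!done.contains(node) && done.containsAll(e.getValue())) {
                submit(node);   // further descendants are submitted as execution progresses
            }
        }
    }
}
```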
The final state of the process then works like this, for example:

- A -> B -> C: A succeeds and B is forced to succeed. If C then executes successfully, the status of the processInstance is success.
- A -> B -> C together with A -> D -> E: A succeeds, B is forced to succeed, and D fails. In this case the operation only triggers C. Even if C executes successfully, the status of the whole processInstance is still failure, because E never got to run. The user can then either choose to recover from the failure, or force D to succeed and trigger the resume again.
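The examples above suggest a simple rule for the final state: the process instance is successful only if every task reached success or forced success; any remaining failure (and therefore any unreachable downstream node such as E) keeps the instance failed. A small illustrative check, with hypothetical types, under that assumption:

```java
import java.util.List;

// Illustrative final-state rule consistent with the examples above; types and names are assumptions.
public final class ProcessStateRule {

    public enum TaskState { SUCCESS, FORCED_SUCCESS, FAILURE, NOT_RUN }
    public enum ProcessState { SUCCESS, FAILURE }

    private ProcessStateRule() { }

    /** The process succeeds only when every task ended in SUCCESS or FORCED_SUCCESS. */
    public static ProcessState finalState(List<TaskState> taskStates) {
        for (TaskState state : taskStates) {
            if (state != TaskState.SUCCESS && state != TaskState.FORCED_SUCCESS) {
                return ProcessState.FAILURE;   // e.g. D failed, so E never ran -> whole instance fails
            }
        }
        return ProcessState.SUCCESS;
    }

    public static void main(String[] args) {
        // A -> B -> C: A success, B forced success, C success            => SUCCESS
        System.out.println(finalState(List.of(
                TaskState.SUCCESS, TaskState.FORCED_SUCCESS, TaskState.SUCCESS)));
        // Add branch A -> D -> E with D failed and E never run           => FAILURE
        System.out.println(finalState(List.of(
                TaskState.SUCCESS, TaskState.FORCED_SUCCESS, TaskState.SUCCESS,
                TaskState.FAILURE, TaskState.NOT_RUN)));
    }
}
```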
3. Modified the return value of taskStateCount in the original dataAnalysis interface. This interface counts the number of tasks in each state under a project; because of the additional forced-success state, its return value has been changed.
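For illustration only, a per-state count that now includes the new state might look like the following; the names are assumptions and not the actual response schema.

```java
import java.util.EnumMap;
import java.util.Map;

// Hypothetical sketch of a per-state task count that now includes FORCED_SUCCESS; not the real response schema.
public class TaskStateCountExample {

    public enum TaskState { SUBMITTED, RUNNING, SUCCESS, FAILURE, STOP, FORCED_SUCCESS }

    /** Count tasks per state, making sure every state (including FORCED_SUCCESS) appears in the result. */
    public static Map<TaskState, Integer> countByState(Iterable<TaskState> taskStates) {
        Map<TaskState, Integer> counts = new EnumMap<>(TaskState.class);
        for (TaskState s : TaskState.values()) {
            counts.put(s, 0);                       // zero-fill so clients always see the new state
        }
        for (TaskState s : taskStates) {
            counts.merge(s, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(countByState(java.util.List.of(
                TaskState.SUCCESS, TaskState.FAILURE, TaskState.FORCED_SUCCESS)));
    }
}
```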
In addition to the above, some classes affected by the new commandType were also modified.