Description
Apache Airflow version
3.1.3
If "Other Airflow 2/3 version" selected, which one?
No response
What happened?
When you clear a dag run, its state changes to QUEUED, which should set the queued_at timestamp to the newly-re-queued time, but for some reason it doesn't. I submitted #59066, but that only fixes it when the run is in a SUCCESS or FAILURE state; if you clear a RUNNING run, it still doesn't work. The discussion in that PR also suggests there may be something deeper going on with SQLAlchemy that my "fix" is merely papering over rather than actually fixing, and that may be worth a look.
I might get to this one, but I am trying to wrap a few other things up before the holidays, so if someone gets to it before me, that would be great.
What you think should happen instead?
When you clear a dag run, its state changes to QUEUED, which should set the queued_at timestamp to the newly-re-queued time.
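For illustration only, here is a minimal sketch of the expected behavior. The DagRun class and lowercase state strings below are simplified stand-ins, not Airflow's actual model or its fix:

```python
from datetime import datetime, timezone


class DagRun:
    """Simplified stand-in for Airflow's DagRun model, for illustration only."""

    def __init__(self):
        self.state = "running"
        self.queued_at = None

    def set_state(self, new_state):
        # Expected behavior: any transition into QUEUED (e.g. from a clear)
        # should refresh queued_at, regardless of the previous state --
        # including RUNNING, which is the case the current code misses.
        if new_state == "queued" and self.state != "queued":
            self.queued_at = datetime.now(timezone.utc)
        self.state = new_state


# Clearing a RUNNING run should still update queued_at.
run = DagRun()
run.set_state("queued")
assert run.queued_at is not None
```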
How to reproduce
I tested this by running a dag like this (imports shown for completeness; BashOperator comes from the standard provider in Airflow 3):

```python
from airflow.sdk import DAG
from airflow.providers.standard.operators.bash import BashOperator

with DAG(dag_id="really_long_dag"):
    BashOperator(task_id="sleep_task", bash_command="sleep 10000")
```
Run the dag and check the queued_at time in the database. You can use psql in the Breeze environment or perhaps your IDE has a database monitoring connection; I know PyCharm does.
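For example, from psql a query like the following shows the relevant timestamp (assuming the default dag_run table and column names in the metadata database):

```sql
SELECT run_id, state, queued_at
FROM dag_run
WHERE dag_id = 'really_long_dag'
ORDER BY id DESC;
```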
Clear the run (I used the UI); the run state will flash through QUEUED to RUNNING, but the queued_at timestamp will remain unchanged.
Wait for the dag to finish or force it to a terminal state, then clear it again. You'll see it flash through the state change again, but this time the queued_at time in the db will have updated.
Operating System
linux
Versions of Apache Airflow Providers
No response
Deployment
Other
Deployment details
No response
Anything else?
No response
Are you willing to submit PR?
- Yes I am willing to submit a PR!
Code of Conduct
- I agree to follow this project's Code of Conduct