🐛 BUG: nonactionable error about migration #705
Oddly, if I try multiple times, I do get the publish to exit without an error:

```
$ wrangler publish
 ⛅️ wrangler 0.0.0-52ea60f
---------------------------

✘ [ERROR] Received a bad response from the API

  Migration tag precondition failed; current tag is v1 [code: 10079]

  If you think this is a bug, please open an issue at:
  https://github.com/cloudflare/wrangler2/issues/new

$ wrangler publish
 ⛅️ wrangler 0.0.0-52ea60f
---------------------------

✘ [ERROR] Received a bad response from the API

  Migration tag precondition failed; current tag is v1 [code: 10079]

  If you think this is a bug, please open an issue at:
  https://github.com/cloudflare/wrangler2/issues/new

$ wrangler publish
 ⛅️ wrangler 0.0.0-52ea60f
---------------------------

Uploaded request-counter (4.09 sec)
Published request-counter (4.51 sec)
  request-counter.swoosh.workers.dev
```
That error is straight from the API, but even I'm not sure what it's indicating.
Had you previously pushed a DO with a different migration for this Worker? Perhaps a different class name?
In the config service code base we have:
which is what causes this error. So I believe you must have already used this tag in another migration, which you have since changed?
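The config service snippet isn't shown above, but the behaviour it describes can be sketched as follows. This is a hypothetical illustration, not the actual config service code: the API compares the `old_tag` the client sends against the script's currently applied migration tag, and rejects the upload with code 10079 when they don't match (including when the client sends no tag at all).

```typescript
// Hypothetical sketch of the server-side precondition check.
// `MigrationUpload`, `checkMigrationTag`, and the field names are
// illustrative assumptions, not wrangler's or the API's real types.
interface MigrationUpload {
  old_tag?: string; // tag the client believes is currently applied
  new_tag?: string; // tag to move to after this migration
}

function checkMigrationTag(
  currentTag: string | undefined,
  upload: MigrationUpload
): { ok: true } | { ok: false; error: string } {
  // If the script already has a migration tag, the client must echo it
  // back as old_tag; a blank or missing old_tag fails the precondition.
  if (currentTag !== undefined && upload.old_tag !== currentTag) {
    return {
      ok: false,
      error: `Migration tag precondition failed; current tag is ${currentTag} [code: 10079]`,
    };
  }
  return { ok: true };
}
```

Under this model, a publish that re-sends blank migrations for a script already at `v1` would be rejected exactly as shown in the report.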
I think I might have found some bugs with the way we do migrations. I'll work on it today and add some test coverage.
I'm a bit puzzled about what went wrong here. I've never used any migration except for the v1 "new_classes" migration, which is required for me to deploy the DO. Initially I deployed the DO successfully (after being asked to add this new_classes migration). It was the subsequent publishes that failed.
I'll have a fix for this later today. The API appears to have changed its behaviour a little, and I have a fix for that. More concerning, as you mentioned, is that the error isn't uniform. I'll follow up on that internally.
We had a bug where, even if you'd already published a script with migrations, we would still send a blank set of migrations on the next publish. The API doesn't accept this, so the fix is to not do so. I also expanded test coverage for migrations. Fixes #705
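The fix described above can be sketched like this. All names here (`migrationsToUpload`, `Migration`, the return shape) are illustrative assumptions, not wrangler's actual internals: the point is that once a script has a migration tag, subsequent publishes must either send only the unapplied migrations with the correct `old_tag`, or omit the migrations field entirely, never a blank set.

```typescript
// Illustrative sketch of the client-side fix; not wrangler's real code.
interface Migration {
  tag: string;
  new_classes?: string[];
}

function migrationsToUpload(
  scriptCurrentTag: string | undefined,
  configured: Migration[]
): { old_tag?: string; new_tag: string; steps: Migration[] } | undefined {
  // Skip every migration the script has already applied.
  const applied = scriptCurrentTag
    ? configured.findIndex((m) => m.tag === scriptCurrentTag) + 1
    : 0;
  const pending = configured.slice(applied);
  if (pending.length === 0) {
    // Nothing new to apply: omit migrations from the upload instead of
    // sending an empty set, which the API rejects with code 10079.
    return undefined;
  }
  return {
    old_tag: scriptCurrentTag,
    new_tag: pending[pending.length - 1].tag,
    steps: pending,
  };
}
```

With this shape, republishing an unchanged config against a script already at `v1` sends no migrations at all, which avoids the precondition failure.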
We previously waited for Miniflare to be ready before `dispose()`ing. Unfortunately, we weren't waiting for the `workerd` config to finish being written to stdin. Calling `dispose()` immediately after `new Miniflare()` would stop waiting for socket ports to be reported, and kill the `workerd` process while data was still being written. This threw an unhandled `EPIPE` error. This change makes sure we don't report that Miniflare is ready until after the config is fully written. Closes #680
What version of Wrangler are you using?
0.0.0-52ea60f
What operating system are you using?
mac
Describe the Bug
I tried to publish my worker with a DO and I got this error that puzzles me... I have no idea what to do about this:
```
$ wrangler publish
 ⛅️ wrangler 0.0.0-52ea60f
---------------------------

✘ [ERROR] Received a bad response from the API

  Migration tag precondition failed; current tag is v1 [code: 10079]
```
wrangler.toml
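The reporter's wrangler.toml contents aren't shown above. For context, a minimal Durable Object `new_classes` migration block looks roughly like this (the binding and class names here are placeholders, not taken from the issue):

```toml
# Placeholder names; the issue's actual wrangler.toml is not shown.
name = "request-counter"

[durable_objects]
bindings = [{ name = "COUNTER", class_name = "RequestCounter" }]

[[migrations]]
tag = "v1"                        # the tag the API tracks per script
new_classes = ["RequestCounter"]  # required when first deploying a DO class
```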
Could we improve the error message to be actionable?