
🐛 BUG: nonactionable error about migration #705

Closed
IgorMinar opened this issue Mar 26, 2022 · 7 comments · Fixed by #728 or #721

Comments

@IgorMinar
Contributor

What version of Wrangler are you using?

0.0.0-52ea60f

What operating system are you using?

mac

Describe the Bug

I tried to publish my worker with a DO and I got this error that puzzles me... I have no idea what to do about it:

$ wrangler publish
 ⛅️ wrangler 0.0.0-52ea60f 
---------------------------

✘ [ERROR] Received a bad response from the API

  Migration tag precondition failed; current tag is v1 [code: 10079]

wrangler.toml

name = "request-counter"
main = "src/index.ts"
compatibility_date = "2022-03-25"

[[durable_objects.bindings]]
name = "doRequestCounter"
class_name = "DurableRequestCounter"

[[migrations]]
tag = "v1"
new_classes = ["DurableRequestCounter"]
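For context on how migration tags chain: a later schema change would reference this `v1` tag as its predecessor. A hypothetical follow-up migration (the class rename here is purely illustrative) would look like:

```toml
# Hypothetical next migration: the tag chain continues from "v1".
[[migrations]]
tag = "v2"
renamed_classes = [{ from = "DurableRequestCounter", to = "RequestCounter" }]
```

The API's tag precondition check compares the deployed tag against the `old_tag` implied by this chain, which is where the error in question originates.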

Could we improve the error message to be actionable?

@IgorMinar IgorMinar added the bug label Mar 26, 2022
@IgorMinar
Contributor Author

Oddly.. if I try multiple times, I do get the publish to exit without an error:

$ wrangler publish
 ⛅️ wrangler 0.0.0-52ea60f 
---------------------------

✘ [ERROR] Received a bad response from the API

  Migration tag precondition failed; current tag is v1 [code: 10079]
  
  If you think this is a bug, please open an issue at:
  https://github.com/cloudflare/wrangler2/issues/new


$ wrangler publish
 ⛅️ wrangler 0.0.0-52ea60f 
---------------------------

✘ [ERROR] Received a bad response from the API

  Migration tag precondition failed; current tag is v1 [code: 10079]
  
  If you think this is a bug, please open an issue at:
  https://github.com/cloudflare/wrangler2/issues/new


$ wrangler publish
 ⛅️ wrangler 0.0.0-52ea60f 
---------------------------
Uploaded request-counter (4.09 sec)
Published request-counter (4.51 sec)
  request-counter.swoosh.workers.dev
  * * * * *

@threepointone
Contributor

That error comes straight from the API, but even I'm not sure what it's indicating; your wrangler.toml looks fine to me.

@petebacondarwin
Contributor

Had you previously pushed a DO with a different migration for this Worker? Perhaps a different class name?
Can you look in the dashboard to see if there is a DO defined for this Worker?

@petebacondarwin
Contributor

In the config service code base we have:

	// old tag provided but different

	if existing != nil && actorMigrations != nil && actorMigrations.OldTag != existing.MigrationTag {
		return "", errors.WithStack(api.ActorTagPreconditionError(existing.MigrationTag))
	}

which is what is causing this error.

So I believe that you must have already used this tag in another migration, which you have since changed?

@threepointone
Contributor

I think I might have found some bugs in the way we do migrations. I'll work on it today and add some test coverage.

@IgorMinar
Contributor Author

IgorMinar commented Mar 29, 2022

I'm a bit puzzled about what went wrong here. I've never used any migration except for the v1 "new_classes" migration, which is required for me to deploy the DO.

Initially I deployed the DO successfully (after being asked to add this new_classes migration). It was the subsequent publish invocations that failed (in spite of me not making any changes that would require any additional migration). But they failed only sometimes, which suggests that there is some kind of race condition or caching issue upstream.

@threepointone
Contributor

I'll have a fix for this later today. The API appears to have changed its behaviour a little, and I have a fix for that. More concerning, as you mentioned, is that the error is not uniform. I'll follow up on that internally.

threepointone added a commit that referenced this issue Mar 30, 2022
We had a bug where even if you'd published a script with migrations, we would still send a blank set of migrations on the next round. The API doesn't accept this, so the fix is to not do so. I also expanded test coverage for migrations.
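The fix described in this commit can be sketched roughly as follows. This is a hypothetical helper, not wrangler's actual implementation: the idea is to compute which configured migrations are still pending relative to the tag the API reports as deployed, and to send nothing at all (rather than a blank migration set) when the script is already up to date.

```typescript
interface Migration {
  tag: string;
  new_classes?: string[];
  renamed_classes?: { from: string; to: string }[];
}

interface MigrationUpload {
  old_tag?: string;
  new_tag: string;
  steps: Migration[];
}

// Given the [[migrations]] from wrangler.toml and the tag currently
// deployed (undefined for a fresh script), return the migrations that
// still need to be applied, or undefined if none do. Returning
// undefined — instead of an empty set — is the key point: sending a
// blank migration set trips the API's tag precondition check.
function migrationsToUpload(
  all: Migration[],
  deployedTag: string | undefined
): MigrationUpload | undefined {
  if (all.length === 0) return undefined; // no migrations configured

  // Find where the deployed tag sits in the configured chain; a fresh
  // script starts from the beginning.
  const start =
    deployedTag === undefined
      ? 0
      : all.findIndex((m) => m.tag === deployedTag) + 1;

  const pending = all.slice(start);
  if (pending.length === 0) return undefined; // already at the latest tag

  return {
    old_tag: deployedTag,
    new_tag: pending[pending.length - 1].tag,
    steps: pending,
  };
}
```

With the `v1` config from this issue, a second publish would find no pending migrations and omit the migrations payload entirely, avoiding the `10079` precondition error.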

Fixes #705
mrbbot added a commit that referenced this issue Oct 31, 2023
We previously waited for Miniflare to be ready before `dispose()`ing.
Unfortunately, we weren't waiting for the `workerd` config to finish
being written to stdin. Calling `dispose()` immediately after
`new Miniflare()` would stop waiting for socket ports to be reported,
and kill the `workerd` process while data was still being written.
This threw an unhandled `EPIPE` error.

This change makes sure we don't report that Miniflare is ready until
after the config is fully written.

Closes #680