ImportError: No module named 'google.api.core' #3885

Closed
mikelambert opened this issue Aug 26, 2017 · 5 comments

@mikelambert

I believe the recent release of google-cloud-datastore==1.3.0 broke any new installations for anyone who was using google-cloud==0.25 or google-cloud==0.26. In particular, those older versions have an overly broad version dependency on google-cloud-datastore, and 1.3.0 unfortunately depends on google.api.core, a 0.27-ism.

The original question (and my more detailed explanation) can be found here:
https://stackoverflow.com/questions/45880108/google-cloud-platform-importerror-no-module-named-google-api-core-on-deploy/45899998#45899998

This broke for me despite my attempt to pin my pip dependencies exactly: it broke my continuous build and new checkouts, and I assume it could break new live machines too. I suspect you'll see an increasing number of reports of this as people attempt new installs.
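
For anyone else hitting this before a proper fix lands, one workaround is to pin the transitive dependency yourself. The exact versions below are illustrative assumptions on my part, not an official recommendation:

    # Workaround sketch: pin google-cloud-datastore explicitly so pip does not
    # resolve it to 1.3.0 while you are still on the 0.25/0.26 metapackage.
    #
    #   requirements.txt
    #   ----------------
    #   google-cloud==0.26.0
    #   google-cloud-datastore==1.2.0   # assumed last release without the google.api.core import
    #
    # With the pin in place, the import that was failing works again:
    from google.cloud import datastore

    print(datastore.__name__)  # sanity check that the package imports cleanly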

But I think the proper fix going forward might require revisiting versioning of projects and overly-broad dependencies and semver usage.

@lukesneeringer
Contributor

We actually changed our practice on this because of exactly this problem, and we now depend on specific minor releases in the metapackage.

Additionally, we are working as fast as we can on getting the core package to 1.0 and not making breaking changes in it going forward, because this has been such a consistent problem. We are hoping to be at that point in a month or so.
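
Roughly, the tighter pinning amounts to something like the sketch below. The version specifiers here are illustrative assumptions, not the actual metapackage setup.py:

    # Hypothetical sketch of the metapackage pinning strategy described above.
    # The package names are real; the specifiers are illustrative assumptions.
    from setuptools import setup

    setup(
        name="google-cloud",
        version="0.27.0",
        install_requires=[
            # A broad pin such as "google-cloud-datastore >= 1.0.0" lets a future
            # breaking release satisfy the constraint long after this metapackage
            # ships; pinning to a specific minor series keeps old releases installable.
            "google-cloud-datastore >= 1.3.0, < 1.4.0",
        ],
    )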

@lukesneeringer
Contributor

lukesneeringer commented Aug 28, 2017

@mikelambert While I unreservedly acknowledge the original problem (#3579), I obviously cannot do anything to change the dependencies of past releases. Is there something else you assert we should be doing other than what was outlined and done in the fixes to that issue?

@mikelambert
Author

I was originally posting this here as a reference for others who might encounter this problem. Apologies if it's not obviously actionable.

As far as past releases go...obviously, in hindsight, you could have bumped google-cloud-datastore to 2.0 instead of 1.3, just to avoid old versions depending on it. It seems like you knew things would break when you did 569e739 (i.e., the need to bound version dependencies), so at that point you could also have bumped datastore to 2.0 to ensure 0.25/0.26 stayed stable.

Another approach (given that it's only been a few days since you released 1.3), is to:

  • release google-cloud-datastore==1.3.1, that is basically the same as the last-known-working-version (thus fixing 0.25, 0.26, etc)
  • release google-cloud-datastore==2.0, that is basically the same as 1.3, and do it alongside a google-cloud==0.27.1 release that bumps the datastore dependency to >=2.0, <2.1 (see the sketch after this list).
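
To make the effect of that shuffle concrete, here is a rough sketch of how the specifiers would resolve. The broad pin I assume for the old 0.25/0.26 metapackage is a guess on my part:

    # Sketch of why the re-release plan above would help. The old metapackage's
    # specifier below is an assumption; the proposed one matches the bullet above.
    from packaging.specifiers import SpecifierSet

    old_metapackage_pin = SpecifierSet(">=1.0.0,<2.0")  # assumed broad pin in google-cloud 0.25/0.26
    proposed_0271_pin = SpecifierSet(">=2.0,<2.1")      # proposed pin for google-cloud 0.27.1

    for version in ["1.2.0", "1.3.0", "1.3.1", "2.0", "2.0.1"]:
        print(version, version in old_metapackage_pin, version in proposed_0271_pin)

    # 1.3.1 (the re-released last-known-good code) still satisfies the old broad pin,
    # while 2.0 (the re-released 1.3 code) is only picked up by the new 0.27.1 pin.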

However, I'm not sure the annoyance of this release switcharoo is justified. When I worked in google-production, if someone broke borg/mpm installs of existing packages/builds, we would have demanded this solution (instead of asking everyone else to "just upgrade"). But I have no idea how many google-cloud users there are, or how many have pinned dependencies and will be upset by the breakage, etc.

In my case, I just upgraded to 0.27, and things are working for me. I can imagine that enterprise-level customers or serious businesses will not take "just upgrade!" advice so lightly, and will want something safer. But I can't speak for them...if this kind of breakage is common in google-cloud release versioning, then maybe customers are already used to these pains?

So yeah, that's your (or your PM's) call. :)

@VikramTiwari
Contributor

I agree with @mikelambert's sentiment here. "Just upgrade" is fine for small teams, but not when large and complex systems depend on it. The Python universe is filled with 0.x packages, and by definition 0.x gives you the power to make breaking changes, but unless you want people to stop using anything at 0.x, you should be more considerate about the release schedule.

Also, this is not the first time this has happened. Maybe opening an issue with a deprecation schedule would be helpful, but the best approach would be to follow semver properly.

@lukesneeringer
Contributor

Apologies if it's not obviously actionable.

No apology necessary. It's our mistake. :-)

Also, this is not the first time this has happened. Maybe opening an issue with a deprecation schedule would be helpful, but the best approach would be to follow semver properly.

The problem in this case is that semver is being properly followed in each specific package; the pain is that we tried to have "GA" packages that depend on "beta" libraries, and in hindsight that was a pretty significant mistake. (I was concerned about this for philosophical reasons originally, but I had no idea what the practical ramifications would be.)

In recognition of the issue, we are actively trying to get all the dependencies to GA, which will alleviate the issue going forward. As @VikramTiwari states, this is not the first time that this has happened, and this is pain that we need to avoid in the future.

I am going to go ahead and close this issue out, as the only way I can do much more about it is to do the pretty significant shuffle that @mikelambert suggests. It is tempting, but I am also concerned that it would be pretty error-prone. We will also avoid doing any more significant releases on the GA packages until we do the final update of core to 1.0.
