
Bucket information is sometimes unavailable from api instance #266

Open
ehossack opened this issue Jun 16, 2021 · 1 comment

Comments

@ehossack
Contributor

I have observed application behaviour in which I'm unable to determine whether my bucket is private or not.
If I look at `bucket.as_dict()`, I can see that the `"bucketType"` property is missing:

```python
{'accountId': 'ececece343434', 'bucketId': 'aoeuaoeuaoeu', 'bucketName': 'some-public-bucket', 'bucketInfo': {}, 'corsRules': [], 'lifecycleRules': [], 'revision': None, 'options': set(), 'defaultServerSideEncryption': {'mode': None}, 'isFileLockEnabled': None, 'defaultRetention': {'mode': 'unknown'}}
```

This is confusing because when a server response is parsed, this information appears to be read: https://github.com/Backblaze/b2-sdk-python/blob/v1.9.0/b2sdk/bucket.py#L970

Yet when restored from the cache, the bucket is constructed from only its id and name: https://github.com/Backblaze/b2-sdk-python/blob/v1.9.0/b2sdk/api.py#L325
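A minimal, self-contained sketch of the inconsistency (the classes below are stand-ins for illustration, not the SDK's actual implementation): the cache persists only the id-to-name mapping, so a bucket rebuilt from it has no type, while one built from a full server response does.

```python
# Stand-in cache that, like the report describes, stores only (id, name).
class DummyCache:
    def __init__(self):
        self._names = {}

    def save_bucket_name(self, bucket_id, bucket_name):
        self._names[bucket_id] = bucket_name

    def get_bucket_name(self, bucket_id):
        return self._names.get(bucket_id)


# Stand-in bucket: type_ is None when the constructor isn't given one.
class DemoBucket:
    def __init__(self, bucket_id, name, bucket_type=None):
        self.id_ = bucket_id
        self.name = name
        self.type_ = bucket_type

    def as_dict(self):
        d = {'bucketId': self.id_, 'bucketName': self.name}
        if self.type_ is not None:
            d['bucketType'] = self.type_
        return d


cache = DummyCache()

# A bucket parsed from a full server response carries its type...
server_bucket = DemoBucket('aoeu', 'some-public-bucket', 'allPublic')
cache.save_bucket_name(server_bucket.id_, server_bucket.name)

# ...but one reconstructed from the cache does not.
cached_bucket = DemoBucket('aoeu', cache.get_bucket_name('aoeu'))

assert 'bucketType' in server_bucket.as_dict()
assert 'bucketType' not in cached_bucket.as_dict()
```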

This leads to an inconsistent experience when interacting with a bucket instance as pulled from the api.

Perhaps the entire bucket dict should be cached? Or should that data be retrieved on demand? Not sure.

For now my workaround is to call `b2_api.session.cache.clear()` and retry the `get_bucket` call to refresh the data, although I'll have to leave it running for some time to see if it fully works.
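The workaround can be wrapped in a small helper, a sketch only: `session.cache.clear()` is taken from the report above, and `get_bucket_by_name` is assumed here as the lookup entry point rather than verified against the SDK.

```python
def get_bucket_with_fresh_info(b2_api, bucket_name):
    """Drop the (possibly partial) cached entry and re-fetch the bucket.

    Clearing the cache first means the lookup cannot be satisfied from
    the stale cached (id, name) pair and should hit the server instead.
    """
    b2_api.session.cache.clear()
    return b2_api.get_bucket_by_name(bucket_name)
```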

@ppolewicz
Collaborator

The thing is that Bucket and FileVersion (and large file upload state?) objects represent the state of a given entity at a given time. That state may change later, though, and the client doesn't know when the change occurs.

The client application may want to use the old cached entity, or it may want a copy retrieved from the server a few milliseconds ago (which is again a snapshot that may already be outdated).

One issue is that we are not caching some of the bucket's properties (caching the entire dict in a JSON or blob field may be a solution for this). The other is that, I think, we should have `Bucket.get_fresh_state() -> Bucket` and `FileVersion.get_fresh_state() -> FileVersion`, which would allow the user to get a new snapshot whenever they need one (regardless of whether the cache contains all properties of a bucket or not).

In the future, new properties of a file or bucket may become available (object lock and encryption were added recently), so client code will always need to be ready to face a stale cache entry that was written back when the new property was not yet supported.
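A hypothetical sketch of the proposed `get_fresh_state()` method (the name comes from the suggestion above; it was not an existing SDK method at the time of writing): re-fetch the entity from the server and return it as a new snapshot object, leaving the old snapshot intact.

```python
class Bucket:
    """Illustrative snapshot object, not the SDK's real Bucket class."""

    def __init__(self, api, bucket_id, bucket_type=None):
        self.api = api
        self.id_ = bucket_id
        self.type_ = bucket_type  # may be None if built from a partial cache

    def get_fresh_state(self):
        """Return a NEW Bucket reflecting the server's current state.

        The caller decides when a fresh snapshot is worth a round-trip;
        the old object is left untouched, so both states can be compared.
        """
        return self.api.get_bucket_by_id(self.id_)
```

The same pattern would apply to `FileVersion.get_fresh_state()`: any snapshot type gains one method that trades a server round-trip for an up-to-date copy.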
