Tags: lancedb/lance
feat: add the s3 retry config options for storage option (#3268) Add `client_max_retries` and `client_retry_timeout` from `RetryConfig` as storage options for the S3 client. If the object store server returns a server error, the `object_store` module of `arrow-rs` will retry up to `client_max_retries` times, as long as the total elapsed time does not exceed `client_retry_timeout`. Closes #3182 --------- Co-authored-by: Will Jones <willjones127@gmail.com>
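A minimal sketch of how these options might be passed alongside the other S3 storage options from the Python bindings. The option keys come from the PR above; the bucket URI and the retry values chosen here are hypothetical:

```python
# Sketch: S3 retry options introduced in #3268, passed as storage options.
# Values are strings, as is conventional for storage_options maps; the
# specific numbers below are illustrative, not recommendations.
storage_options = {
    "client_max_retries": "10",    # retry up to 10 times on server errors
    "client_retry_timeout": "60",  # stop retrying after 60 seconds total
}

# With the Python bindings this would then be passed through, e.g.:
#   import lance
#   ds = lance.dataset("s3://my-bucket/my-table.lance",
#                      storage_options=storage_options)
```

Both limits apply together: retrying stops at whichever of the two bounds is hit first.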
fix: correctly copy null buffer when making deep copy (#3238) In some situations an array could be sliced in such a way that the array itself had no offset, but its null buffer did. In these cases we were not deep copying the array correctly, and the null buffer's offset was lost. This does mean that, in some cases, the 2.0 writer could write incorrect nulls. However, to trigger it the user's data would have to originate from Rust sliced in exactly this way; batches arriving through the C data interface or from Python cannot look like this.
ci: update python/Cargo.lock on version bump (#3207) When we create the version bump commit, it currently updates the lock file `Cargo.lock` to point to the new versions. I suspect it is the `cargo ws version --no-git-commit -y --exact --force 'lance*' ${{ inputs.part }}` command that does this. However, we have two lock files, and `python/Cargo.lock` is not updated. This PR adds a step to the version bump that also updates `python/Cargo.lock`.
ci: add benchmark suite (#3165) Benchmarks report to https://bencher.dev/console/projects/weston-lancedb/plots At some point it may be nice to use these for regression detection in PRs; however, we need to establish a stable baseline first. These benchmarks rely on a private runner hosted by LanceDB and some private datasets, and they run against GCS. It would be good to add NVMe, Azure, and S3 benchmarks at some point.