
Conversation

@steveloughran
Contributor

@steveloughran steveloughran commented Apr 26, 2024

Code for #12055

Using bulk delete for files eliminates the per-file sequence of probing for the status of the destination object, issuing a single delete request, and then probing to see whether an empty directory marker above it needs to be recreated; instead a few hundred objects are deleted in a single request at a time.
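For context, a minimal sketch of what a paged delete looks like through the Hadoop 3.4.1+ BulkDelete API (illustrative only; method and variable names here are not the exact code in this PR):

```java
import java.io.IOException;
import java.util.Collection;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.fs.BulkDelete;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Delete a batch of objects under a common base path in pages, rather than
// issuing one DELETE per object. Failures come back as (path, error) pairs.
void deleteInPages(FileSystem fs, Path base, List<Path> paths) throws IOException {
  try (BulkDelete bulk = fs.createBulkDelete(base)) {
    int pageSize = bulk.pageSize(); // 1 on stores without multi-object delete
    for (int start = 0; start < paths.size(); start += pageSize) {
      Collection<Path> page = paths.subList(start, Math.min(start + pageSize, paths.size()));
      List<Map.Entry<Path, String>> failures = bulk.bulkDelete(page);
      for (Map.Entry<Path, String> failure : failures) {
        // hypothetical handling: the real code would pass undeleted paths back to the caller
        throw new IOException("Failed to delete " + failure.getKey() + ": " + failure.getValue());
      }
    }
  }
}
```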

Tested: S3 London.

In apache/hadoop#7316 there's an (uncommitted) PR for Hadoop which takes an Iceberg library and verifies the correct operation of the Iceberg delete operations
against a live AWS S3 store with and without bulk delete enabled.

https://github.com/apache/hadoop/blob/d37310cf355f3eb137f925bde9a2a299823b8230/hadoop-tools/hadoop-aws/src/test/java17/org/apache/hadoop/fs/contract/s3a/ITestIcebergBulkDelete.java

This is something we can merge into Hadoop as a local regression test once Iceberg has a release with this.

for (String path : pathnames) {
  Path p = new Path(path);
  final URI uri = p.toUri();
  String fsURI = uri.getScheme() + "://" + uri.getHost() + "/";
Contributor

Using the URI class has some significant incompatibilities for various cloud providers like GCP, which allow non-standard characters (e.g. an underscore in a bucket name); these result in the host being empty/null and cause problems. We use classes like GCSLocation or S3URI to work around these issues. We should try to avoid use of URI.

Contributor Author

So we should just use the root path of every FS as the key? It uses the URI as hash code and comparator internally.

Contributor Author

I'm using the root path of every filesystem instance; this relies on the Path type's internal URI handling. There are some S3 bucket names that aren't valid hostnames; for those the S3A policy is "not supported". Dots in bucket names are a key example; even Amazon advises against those.

Contributor Author

@danielcweeks quick resolution for this. FileSystem.get() uses that root path URI to look up filesystems from its cache.

Non-standard hostnames which can't be converted to a URI probably do not work through HadoopFileIO today.
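For illustration, a rough sketch of the grouping being discussed (the map and helper names are hypothetical, not this PR's code):

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Group paths by the root path of their owning filesystem instance.
// Path equality/hashCode is URI-based, so the qualified root works as a map key,
// much as FileSystem.get() keys its cache on the scheme/authority of the URI.
Map<Path, List<Path>> groupByFileSystemRoot(Iterable<String> pathnames, Configuration conf)
    throws IOException {
  Map<Path, List<Path>> pathsByFs = new HashMap<>();
  for (String pathname : pathnames) {
    Path p = new Path(pathname);
    FileSystem fs = p.getFileSystem(conf);
    Path root = fs.makeQualified(new Path("/")); // root of this specific filesystem instance
    pathsByFs.computeIfAbsent(root, k -> new ArrayList<>()).add(p);
  }
  return pathsByFs;
}
```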

@steveloughran
Contributor Author

This PR is in sync with apache/hadoop#6686; the dynamic binding classes here are from that PR (which also copies in the Parquet DynMethods classes for consistency everywhere).

@steveloughran steveloughran changed the title from "HADOOP-18679. Add API for bulk/paged object deletion: Iceberg PoC" to "[draft] HADOOP-18679. Add API for bulk/paged object deletion: Iceberg PoC" May 30, 2024
@steveloughran steveloughran force-pushed the s3/HADOOP-18679-bulk-delete-api branch from 64423e2 to e794b1b June 26, 2024 14:17
@steveloughran
Contributor Author

The latest update runs the tests against the local FS, parameterized on using/not using bulk delete; the library settings have been modified to use Hadoop 3.4.1-SNAPSHOT for this.

It works, and execution time on large file deletion (added to the listPrefix test) is comparable, but there are now two codepaths to test and maintain.

I plan to coalesce them so test coverage is better, even on older builds.

@steveloughran
Contributor Author

  • Going to create a separate test repository to run an Iceberg build against S3/ABFS etc. with a different Hadoop build; this will do end-to-end testing.

@steveloughran steveloughran force-pushed the s3/HADOOP-18679-bulk-delete-api branch from e794b1b to 026ca47 August 13, 2024 12:32
@steveloughran steveloughran force-pushed the s3/HADOOP-18679-bulk-delete-api branch from 5839ca1 to 968809c October 22, 2024 17:00
@github-actions

This pull request has been marked as stale due to 30 days of inactivity. It will be closed in 1 week if no further activity occurs. If you think that’s incorrect or this pull request requires a review, please simply write any comment. If closed, you can revive the PR at any time and @mention a reviewer or discuss it on the dev@iceberg.apache.org list. Thank you for your contributions.

@github-actions github-actions bot added the stale label Nov 22, 2024
@steveloughran steveloughran force-pushed the s3/HADOOP-18679-bulk-delete-api branch from 968809c to fb10c1a November 27, 2024 22:07
@github-actions github-actions bot removed the stale label Nov 28, 2024

@steveloughran steveloughran force-pushed the s3/HADOOP-18679-bulk-delete-api branch from dcb38e6 to b352b04 January 2, 2025 11:51
@github-actions github-actions bot removed the stale label Jan 3, 2025
@steveloughran steveloughran changed the title from "[draft] HADOOP-18679. Add API for bulk/paged object deletion: Iceberg PoC" to "[draft] AWS: support hadoop bulk delete API" Jan 3, 2025
@steveloughran steveloughran force-pushed the s3/HADOOP-18679-bulk-delete-api branch from b352b04 to 661ddc6 January 9, 2025 13:29
@steveloughran
Contributor Author

For anyone watching this, there's a full integration test suite in the Hadoop test code: apache/hadoop#7285

All is good, though as it's the first Java 17 code and depends on an Iceberg jar with this patch in, it's actually dependent on this PR going in first. It does show that yes, bulk delete through the S3A code does work, without me having to add a significant piece of test code to Iceberg to wire up HadoopFileIO to the MinIO Docker container that S3FileIO uses.
It also makes for a good regression test: anything downstream of both will get neglected.

Anyway, with those tests happy, I just have a few more tests to write (mixing in some local files too) and then I'll be confident this is ready for review.

@steveloughran steveloughran changed the title from "[draft] AWS: support hadoop bulk delete API" to "HadoopFileIO to support bulk delete through the Hadoop Filesystem APIs" Jan 22, 2025
@steveloughran steveloughran marked this pull request as ready for review January 22, 2025 19:50
@steveloughran steveloughran marked this pull request as draft January 22, 2025 19:54

@github-actions github-actions bot added the stale label Jun 30, 2025
@github-actions github-actions bot removed the stale label Jul 3, 2025
@steveloughran steveloughran force-pushed the s3/HADOOP-18679-bulk-delete-api branch 2 times, most recently from b02a9e1 to cc51900 July 4, 2025 13:19
Not compiling as branch currently is hadoop 2.7; fix first
Test classpath, bulk deletion invocation, fallback and testing

* Remove from the test classpath any YARN dependencies which Hive pulls in.
  These taint the classpath with older libraries.

* Hardening the fallback such that the list of undeleted paths
  and their failure causes are passed back up; those are the elements
  which are then passed down to the classic iteration algorithm.

* Tests for invocation validate fallback on non-empty directory deletion
  and verify that the behavior is the same on classic and bulk API paths.

+ tests for: empty directory, empty list

Bulk delete: move directly to API, no option to disable.

Not compiling as branch currently is hadoop 2.7; fix first

Extra mocking resilience

If the bulk delete invocation didn't link (e.g. from mocking),
then bulk delete is disabled for the life of this
HadoopFileIO instance.

Make resilient to mocking.

AWS: Use Hadoop bulk delete API where available

Reflection-based use of the Hadoop 3.4.1+ BulkDelete API so that
S3 object deletions can be done in pages of objects, rather
than one at a time.

Configuration option "iceberg.hadoop.bulk.delete.enabled" to switch
to bulk deletes.

There's a unit test which turns this on if the wrapped APIs
are loaded and probes the HadoopFileIO instance for it using
those APIs.

* Parameterized tests for bulk delete on/off
* Cache bulk delete page size; this brings performance of bulk delete
   with page size == 1 to that of single delete
* Use deleteFiles() in tests which create many files; helps highlight
  performance/scale issues against local fs.

+ updates aws.md to cover this and other relevant s3a settings,
  primarily for maximum parquet IO
The changes made earlier to the hadoop exclusions should ensure
that no artifacts of earlier releases get onto the test classpath
of other modules.
@steveloughran steveloughran force-pushed the s3/HADOOP-18679-bulk-delete-api branch from cc51900 to bd51dc5 July 7, 2025 15:47
@steveloughran
Contributor Author

Not sure what went wrong with Flink; rebasing and testing locally to make sure that it is unrelated to my PR.

@steveloughran
Contributor Author

The failure is because the Spark 3.4 runtimes are using older Hadoop releases.

@github-actions github-actions bot added the spark label Jul 30, 2025
@steveloughran
Contributor Author

@danielcweeks so here we are

  1. no reflection
  2. forced update of the Hadoop version on Spark 3.4 and 3.5.

That forced update isn't great, but if Iceberg compiles with Hadoop 3.4.1 then it's already possibly broken somewhere else when run against older releases. We put a lot of effort into backward compatibility of APIs, but it's really hard to do forward compatibility (openFile(), createFile() and other builders are the best we have here).
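As an illustration of why those builders help with forward compatibility: opt() settings are hints, so a filesystem that doesn't understand them ignores them rather than failing. A hedged sketch (the fadvise key is an S3A-specific option; treat the exact call as an assumption, not this PR's code):

```java
import java.io.IOException;
import java.util.concurrent.ExecutionException;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// openFile() builder: optional hints degrade gracefully on stores that ignore them.
FSDataInputStream openForRandomIO(FileSystem fs, Path file)
    throws IOException, InterruptedException, ExecutionException {
  return fs.openFile(file)
      .opt("fs.s3a.experimental.input.fadvise", "random") // hint for seek-heavy columnar reads
      .build()                                            // CompletableFuture<FSDataInputStream>
      .get();
}
```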

A real risk is some overloaded method compiling down differently, different exception signatures etc.

That's independent of this PR, which simply finds the problem faster.

Ignoring that fundamental issue, I could tweak this one to look for the presence of the new API classes via a loadResource call, and not attempt bulk delete if they aren't found. Provided all use of the bulk delete API is isolated to a guarded method, this wouldn't exacerbate the existing issue.
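Roughly this kind of probe-and-guard (a sketch using Class.forName as the presence check rather than a loadResource call; class and field names are illustrative, not this PR's code):

```java
import org.apache.iceberg.hadoop.HadoopFileIO;

// Illustrative probe-and-guard: check once whether the Hadoop 3.4.1+ bulk delete
// class is on the classpath, and skip the bulk path entirely when it is not.
final class BulkDeleteSupport {
  static final boolean PRESENT = probe();

  private static boolean probe() {
    try {
      // initialize=false: only ask whether the class can be found and linked
      Class.forName("org.apache.hadoop.fs.BulkDelete", false, HadoopFileIO.class.getClassLoader());
      return true;
    } catch (ClassNotFoundException | LinkageError e) {
      return false;
    }
  }

  private BulkDeleteSupport() {}
}
```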

How to test if that worked? Remove the forced dependency updates from the Spark versions.

Thoughts? It's easily done, and isn't the full reflection game, just a probe and a guard.

@danielcweeks danielcweeks self-requested a review August 11, 2025 15:38

@github-actions github-actions bot added the stale label Sep 11, 2025
@github-actions

This pull request has been closed due to lack of activity. This is not a judgement on the merit of the PR in any way. It is just a way of keeping the PR queue manageable. If you think that is incorrect, or the pull request requires review, you can revive the PR at any time.

@github-actions github-actions bot closed this Sep 19, 2025
@steveloughran
Contributor Author

Aah, this got closed while I was off on a European-length vacation. PITA.

@nastra
Contributor

nastra commented Sep 23, 2025

@steveloughran PRs get auto-closed after a certain period but you can always revive it

@nastra nastra reopened this Sep 23, 2025
@steveloughran
Contributor Author

@nastra thanks. I think I'll split the "move everything to hadoop 3.4.1 libs" change from the code changes, so that the build changes which highlight Spark 3.4 issues stay separate.

@github-actions github-actions bot removed the stale label Sep 24, 2025
@steveloughran steveloughran marked this pull request as draft September 30, 2025 19:01

@github-actions github-actions bot added the stale label Oct 31, 2025