
[Improvement] Verify and test whether PySpark can access fileset with cloud storage. #5585

Closed
yuqi1129 opened this issue Nov 15, 2024 · 0 comments · Fixed by #5806
Labels: 0.8.0 Release v0.8.0 · improvement (Improvements on everything)

@yuqi1129 (Contributor)
What would you like to be improved?

This issue has the following targets:

  • Whether filesets with cloud storage can be used in PySpark
  • How to configure PySpark to access filesets (see the sketch after this list)
  • Fix any problems that are found.
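
A minimal sketch of the second point, assuming the Gravitino Virtual File System (GVFS) Hadoop connector and the matching cloud-storage jars (e.g. hadoop-aws for S3) are already on the Spark classpath. The configuration keys and class names follow the Gravitino documentation, but the server URI, metalake, catalog, schema, and fileset names are placeholders and may differ by version:

```python
# Hedged sketch: verify that PySpark can read a fileset backed by cloud storage
# through the Gravitino Virtual File System (GVFS). All values below are examples.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("fileset-cloud-storage-check")
    # Route gvfs:// paths through the Gravitino virtual file system implementation.
    .config("spark.hadoop.fs.AbstractFileSystem.gvfs.impl",
            "org.apache.gravitino.filesystem.hadoop.Gvfs")
    .config("spark.hadoop.fs.gvfs.impl",
            "org.apache.gravitino.filesystem.hadoop.GravitinoVirtualFileSystem")
    # Gravitino server and metalake used to resolve fileset metadata (example values).
    .config("spark.hadoop.fs.gravitino.server.uri", "http://localhost:8090")
    .config("spark.hadoop.fs.gravitino.client.metalake", "my_metalake")
    .getOrCreate()
)

# Read from a fileset whose storage location is on cloud storage;
# catalog/schema/fileset names are placeholders.
df = spark.read.parquet("gvfs://fileset/my_catalog/my_schema/my_fileset/data")
df.show()
```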

How should we improve?

No response

@yuqi1129 yuqi1129 added the improvement Improvements on everything label Nov 15, 2024
@yuqi1129 yuqi1129 changed the title [Improvement] Verify and test whether PySpark can use fileset with cloud storage. [Improvement] Verify and test whether PySpark can access fileset with cloud storage. Nov 15, 2024
@jerryshao jerryshao added the 0.8.0 Release v0.8.0 label Dec 27, 2024
Abyss-lord pushed a commit to Abyss-lord/gravitino that referenced this issue Dec 29, 2024
…core jars that does not contains hadoop-{aws,gcp,aliyun,azure} (apache#5806)

### What changes were proposed in this pull request?

Provide another kind of bundle jar that does not contain hadoop-{aws,gcp,aliyun,azure}, such as aws-mini and gcp-mini.
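
To illustrate how a user would consume such a "mini" bundle from PySpark: the Gravitino glue is supplied by the mini jar, while the Hadoop cloud jar matching the cluster's own Hadoop version is provided separately. The jar names and paths below are placeholders, not actual artifact names:

```python
# Hedged sketch: pair a "mini" Gravitino bundle (no bundled hadoop-aws/gcp/aliyun/azure)
# with the user's own hadoop-aws jar, chosen to match the cluster's Hadoop version.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("fileset-with-mini-bundle")
    .config(
        "spark.jars",
        ",".join([
            "/path/to/gravitino-aws-mini-bundle.jar",          # placeholder name
            "/path/to/hadoop-aws-<your-hadoop-version>.jar",   # user-provided
        ]),
    )
    .getOrCreate()
)
```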

### Why are the changes needed?

To make it work across a wide range of Hadoop versions.


Fix: apache#5585 

### Does this PR introduce _any_ user-facing change?

N/A

### How was this patch tested?

Existing UTs and ITs