feat!: Support only sending a subset of the fields to OPA (#49)
* WIP: First implementation
* refactor: Use dynamic dispatch (something something Java :))
* bump to HDFS 3.4.0, as it's the LTS now
* Update deps
* changelog and docs
* fix: Set path to `/` when the operation `contentSummary` is called on `/`
* changelog
* changelog
* Add benchmark shell
* linter
* refactor: Making random stuff final
* Update README.md
Co-authored-by: Lars Francke <git@lars-francke.de>
* docs: Document reduced API call
* markdown linter
* Try silencing markdown linter
* Update src/main/java/tech/stackable/hadoop/StackableAccessControlEnforcer.java
Co-authored-by: Lars Francke <git@lars-francke.de>
* Update README.md
Co-authored-by: Lars Francke <git@lars-francke.de>
* Update benchmark to use nested folder
---------
Co-authored-by: Lars Francke <git@lars-francke.de>
CHANGELOG.md (11 additions, 2 deletions)
All notable changes to this project will be documented in this file.

### Changed

- BREAKING: Only send a subset of the fields sufficient for most use-cases to OPA for performance reasons.
  The old behavior of sending all fields can be restored by setting `hadoop.security.authorization.opa.extended-requests`
  to `true` ([#49]).
- Performance fixes ([#50])
- Updates various dependencies and does a full spotless run. This will now require JDK 17 or later to build
  (required by later error-prone versions), the build target is still Java 11 [#51]
- Bump okio to 1.17.6 to get rid of CVE-2023-3635 ([#46])

### Fixed

- Set path to `/` when the operation `contentSummary` is called on `/`. Previously path was set to `null` ([#49]).
README.md (40 additions, 2 deletions)
The Stackable HDFS already takes care of this, you don't need to do anything in

- Set `dfs.namenode.inode.attributes.provider.class` in `hdfs-site.xml` to `tech.stackable.hadoop.StackableAuthorizer`
- Set `hadoop.security.authorization.opa.policy.url` in `core-site.xml` to the HTTP endpoint of your OPA rego rule, e.g. `http://opa.default.svc.cluster.local:8081/v1/data/hdfs/allow`
- The property `hadoop.security.authorization.opa.extended-requests` (defaults to `false`) controls whether all fields (`true`) or only a subset should be sent to OPA.
  Sending all fields degrades performance, but allows for more advanced authorization.
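The setup steps above could look roughly like this in the site files. This is a sketch only: the property names come from the list above, while the OPA URL is the example endpoint and the values shown are illustrative.

```xml
<!-- hdfs-site.xml: register the Stackable authorizer -->
<property>
  <name>dfs.namenode.inode.attributes.provider.class</name>
  <value>tech.stackable.hadoop.StackableAuthorizer</value>
</property>

<!-- core-site.xml: point HDFS at the OPA rego rule -->
<property>
  <name>hadoop.security.authorization.opa.policy.url</name>
  <value>http://opa.default.svc.cluster.local:8081/v1/data/hdfs/allow</value>
</property>
<property>
  <!-- optional: set to true to send all fields to OPA
       (slower, but richer input for policies) -->
  <name>hadoop.security.authorization.opa.extended-requests</name>
  <value>false</value>
</property>
```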
### API

By default, for every HDFS action a request similar to the following is sent to OPA:
The contained details should be sufficient for most use-cases.

However, if you need access to all the information provided by the `INodeAttributeProvider.AccessControlEnforcer` interface, you can instruct hdfs-utils to send all fields by setting `hadoop.security.authorization.opa.extended-requests` to `true`.
Please note that this results in very large JSON objects being sent from HDFS to OPA, so keep an eye on performance degradation.

The following example provides an extended request sending all available fields:
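Since the endpoint above ends in `/v1/data/hdfs/allow`, the matching policy lives in the `hdfs` package and exposes an `allow` rule. A minimal sketch of such a policy is shown below; the `input` field name used here is a hypothetical illustration, not the exact request shape hdfs-utils sends.

```rego
package hdfs

# Deny by default; HDFS only permits the action when allow evaluates to true.
default allow = false

allow {
    # Hypothetical field name for illustration only.
    input.user == "admin"
}
```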