[Merged by Bors] - OpenShift compatibility #225

3 changes: 2 additions & 1 deletion CHANGELOG.md
@@ -7,8 +7,10 @@ All notable changes to this project will be documented in this file.
### Changed

- Include chart name when installing with a custom release name ([#205])
- Added OpenShift compatibility ([#225])

[#205]: https://github.com/stackabletech/hdfs-operator/pull/205
[#225]: https://github.com/stackabletech/hdfs-operator/pull/225

## [0.4.0] - 2022-06-30

@@ -30,7 +32,6 @@ All notable changes to this project will be documented in this file.
- `HADOOP_OPTS` for the JMX exporter now specified via `HADOOP_NAMENODE_OPTS`, `HADOOP_DATANODE_OPTS` and `HADOOP_JOURNALNODE_OPTS` to fix the CLI tool ([#148]).
- [BREAKING] Specifying the product version has been changed to adhere to [ADR018](https://docs.stackable.tech/home/contributor/adr/ADR018-product_image_versioning.html): instead of just specifying the product version, you will now have to add the Stackable image version as well, so `version: 3.5.8` becomes (for example) `version: 3.5.8-stackable0.1.0` ([#180])


[#122]: https://github.com/stackabletech/hdfs-operator/pull/122
[#130]: https://github.com/stackabletech/hdfs-operator/pull/130
[#134]: https://github.com/stackabletech/hdfs-operator/pull/134
80 changes: 80 additions & 0 deletions deploy/helm/hdfs-operator/templates/roles.yaml
@@ -89,3 +89,83 @@ rules:
- {{ include "operator.name" . }}clusters/status
verbs:
- patch
- apiGroups:
- rbac.authorization.k8s.io
resources:
- clusterroles
verbs:
- bind
resourceNames:
- {{ include "operator.name" . }}-clusterrole
{{ if .Capabilities.APIVersions.Has "security.openshift.io/v1" }}
---
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
name: hdfs-scc
annotations:
kubernetes.io/description: |-
This resource is derived from hostmount-anyuid. It provides all the features of the
restricted SCC but allows host mounts and any UID by a pod. This is primarily
used by the persistent volume recycler. WARNING: this SCC allows host file
system access as any UID, including UID 0. Grant with caution.
release.openshift.io/create-only: "true"
allowHostDirVolumePlugin: true
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: true
allowPrivilegedContainer: false
allowedCapabilities: null
defaultAddCapabilities: null
fsGroup:
type: RunAsAny
readOnlyRootFilesystem: false
runAsUser:
type: RunAsAny
seLinuxContext:
type: MustRunAs
supplementalGroups:
type: RunAsAny
volumes:
- configMap
- downwardAPI
- emptyDir
- hostPath
- nfs
- persistentVolumeClaim
- projected
- secret
- ephemeral
{{ end }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ include "operator.name" . }}-clusterrole
rules:
- apiGroups:
- ""
resources:
- configmaps
- secrets
- serviceaccounts
verbs:
- get
- apiGroups:
- events.k8s.io
resources:
- events
verbs:
- create
{{ if .Capabilities.APIVersions.Has "security.openshift.io/v1" }}
- apiGroups:
- security.openshift.io
resources:
- securitycontextconstraints
resourceNames:
- hdfs-scc
verbs:
- use
{{ end }}
28 changes: 28 additions & 0 deletions deploy/manifests/roles.yaml
@@ -89,3 +89,31 @@ rules:
- hdfsclusters/status
verbs:
- patch
- apiGroups:
- rbac.authorization.k8s.io
resources:
- clusterroles
verbs:
- bind
resourceNames:
- hdfs-clusterrole
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: hdfs-clusterrole
rules:
- apiGroups:
- ""
resources:
- configmaps
- secrets
- serviceaccounts
verbs:
- get
- apiGroups:
- events.k8s.io
resources:
- events
verbs:
- create
11 changes: 11 additions & 0 deletions rust/crd/src/error.rs
@@ -126,6 +126,17 @@ pub enum Error {
JournalnodeJavaHeapConfig {
source: stackable_operator::error::Error,
},

#[error("failed to patch service account: {source}")]
ApplyServiceAccount {
name: String,
source: stackable_operator::error::Error,
},
#[error("failed to patch role binding: {source}")]
ApplyRoleBinding {
name: String,
source: stackable_operator::error::Error,
},
}
pub type HdfsOperatorResult<T> = std::result::Result<T, Error>;

31 changes: 29 additions & 2 deletions rust/operator/src/hdfs_controller.rs
@@ -2,11 +2,11 @@ use crate::config::{
CoreSiteConfigBuilder, HdfsNodeDataDirectory, HdfsSiteConfigBuilder, ROOT_DATA_DIR,
};
use crate::discovery::build_discovery_configmap;
use crate::OPERATOR_NAME;
use crate::{rbac, OPERATOR_NAME};
use stackable_hdfs_crd::error::{Error, HdfsOperatorResult};
use stackable_hdfs_crd::{constants::*, ROLE_PORTS};
use stackable_hdfs_crd::{HdfsCluster, HdfsPodRef, HdfsRole};
use stackable_operator::builder::{ConfigMapBuilder, ObjectMetaBuilder};
use stackable_operator::builder::{ConfigMapBuilder, ObjectMetaBuilder, PodSecurityContextBuilder};
use stackable_operator::client::Client;
use stackable_operator::k8s_openapi::api::core::v1::{
Container, ContainerPort, ObjectFieldSelector, PodSpec, PodTemplateSpec, Probe,
@@ -25,6 +25,7 @@ use stackable_operator::kube::api::ObjectMeta;
use stackable_operator::kube::runtime::controller::Action;
use stackable_operator::kube::runtime::events::{Event, EventType, Recorder, Reporter};
use stackable_operator::kube::runtime::reflector::ObjectRef;
use stackable_operator::kube::ResourceExt;
use stackable_operator::labels::role_group_selector_labels;
use stackable_operator::memory::to_java_heap;
use stackable_operator::product_config::{types::PropertyNameKind, ProductConfigManager};
@@ -72,6 +73,22 @@ pub async fn reconcile_hdfs(hdfs: Arc<HdfsCluster>, ctx: Arc<Ctx>) -> HdfsOperat

let dfs_replication = hdfs.spec.dfs_replication;

let (rbac_sa, rbac_rolebinding) = rbac::build_rbac_resources(hdfs.as_ref(), "hdfs");
client
.apply_patch(FIELD_MANAGER_SCOPE, &rbac_sa, &rbac_sa)
.await
.map_err(|source| Error::ApplyServiceAccount {
source,
name: rbac_sa.name_any(),
})?;
client
.apply_patch(FIELD_MANAGER_SCOPE, &rbac_rolebinding, &rbac_rolebinding)
.await
.map_err(|source| Error::ApplyRoleBinding {
source,
name: rbac_rolebinding.name_any(),
})?;

for (role_name, group_config) in validated_config.iter() {
let role: HdfsRole = serde_yaml::from_str(role_name).unwrap();
let role_ports = ROLE_PORTS.get(&role).unwrap().as_slice();
@@ -111,6 +128,7 @@ pub async fn reconcile_hdfs(hdfs: Arc<HdfsCluster>, ctx: Arc<Ctx>) -> HdfsOperat
&rolegroup_ref,
&namenode_podrefs,
&hadoop_container,
&rbac_sa.name_any(),
)?;

client
@@ -295,6 +313,7 @@ fn rolegroup_statefulset(
rolegroup_ref: &RoleGroupRef<HdfsCluster>,
namenode_podrefs: &[HdfsPodRef],
hadoop_container: &Container,
rbac_sa: &str,
) -> HdfsOperatorResult<StatefulSet> {
tracing::info!("Setting up StatefulSet for {:?}", rolegroup_ref);
let service_name = rolegroup_ref.object_name();
@@ -342,6 +361,14 @@ }),
}),
..Volume::default()
}]),
service_account: Some(rbac_sa.to_string()),
security_context: Some(
PodSecurityContextBuilder::new()
.run_as_user(rbac::HDFS_UID)
.run_as_group(0)
.fs_group(1000) // Needed for secret-operator
.build(),
),
..PodSpec::default()
}),
};
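Taken together, the controller changes above mean every rolegroup StatefulSet now runs under the operator-managed service account with a fixed pod security context. A sketch of the resulting pod template fields, assuming the `hdfs` RBAC prefix passed in `reconcile_hdfs` (illustrative excerpt, not captured operator output):

```yaml
spec:
  serviceAccountName: hdfs-sa   # from rbac::build_rbac_resources(hdfs, "hdfs")
  securityContext:
    runAsUser: 1000             # rbac::HDFS_UID, the UID used by the Hadoop image
    runAsGroup: 0
    fsGroup: 1000               # needed for secret-operator volumes
```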
1 change: 1 addition & 0 deletions rust/operator/src/lib.rs
@@ -2,6 +2,7 @@ mod config;
mod discovery;
mod hdfs_controller;
mod pod_svc_controller;
mod rbac;

use std::sync::Arc;

44 changes: 44 additions & 0 deletions rust/operator/src/rbac.rs
@@ -0,0 +1,44 @@
use stackable_operator::builder::ObjectMetaBuilder;
use stackable_operator::k8s_openapi::api::core::v1::ServiceAccount;
use stackable_operator::k8s_openapi::api::rbac::v1::{RoleBinding, RoleRef, Subject};
use stackable_operator::kube::{Resource, ResourceExt};

/// Used as `runAsUser` in the pod security context. This UID is specified in the Hadoop image.
pub const HDFS_UID: i64 = 1000;

/// Build RBAC objects for the product workloads.
/// The `rbac_prefix` is meant to be the product name, for example: zookeeper, airflow, etc.
/// and it is assumed that a ClusterRole named `{rbac_prefix}-clusterrole` exists.
pub fn build_rbac_resources<T: Resource>(
resource: &T,
rbac_prefix: &str,
) -> (ServiceAccount, RoleBinding) {
let sa_name = format!("{rbac_prefix}-sa");
let service_account = ServiceAccount {
metadata: ObjectMetaBuilder::new()
.name_and_namespace(resource)
.name(sa_name.clone())
.build(),
..ServiceAccount::default()
};

let role_binding = RoleBinding {
metadata: ObjectMetaBuilder::new()
.name_and_namespace(resource)
.name(format!("{rbac_prefix}-rolebinding"))
.build(),
role_ref: RoleRef {
kind: "ClusterRole".to_string(),
name: format!("{rbac_prefix}-clusterrole"),
api_group: "rbac.authorization.k8s.io".to_string(),
},
subjects: Some(vec![Subject {
kind: "ServiceAccount".to_string(),
name: sa_name,
namespace: resource.namespace(),
..Subject::default()
}]),
};

(service_account, role_binding)
}
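For a hypothetical cluster deployed in namespace `default`, calling `build_rbac_resources(&hdfs, "hdfs")` would yield objects along these lines (an illustrative sketch of the rendered manifests; the namespace and any extra metadata depend on the actual resource, and the referenced ClusterRole is assumed to be installed separately, e.g. via `deploy/manifests/roles.yaml`):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: hdfs-sa            # "{rbac_prefix}-sa"
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: hdfs-rolebinding   # "{rbac_prefix}-rolebinding"
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: hdfs-clusterrole   # assumed to exist; not created by this function
subjects:
  - kind: ServiceAccount
    name: hdfs-sa
    namespace: default
```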
2 changes: 1 addition & 1 deletion tests/templates/kuttl/fs-ops/01-assert.yaml
@@ -3,7 +3,7 @@ apiVersion: kuttl.dev/v1beta1
kind: TestAssert
metadata:
name: install-hdfs
timeout: 300
timeout: 600
---
apiVersion: apps/v1
kind: StatefulSet
2 changes: 1 addition & 1 deletion tests/templates/kuttl/fs-ops/02-webhdfs.yaml
@@ -17,6 +17,6 @@ spec:
spec:
containers:
- name: webhdfs
image: python:3.10-slim
image: docker.stackable.tech/stackable/testing-tools:0.1.0-stackable0.1.0
stdin: true
tty: true
2 changes: 0 additions & 2 deletions tests/templates/kuttl/fs-ops/03-create-file.yaml
@@ -4,6 +4,4 @@ kind: TestStep
commands:
- script: kubectl cp -n $NAMESPACE ./webhdfs.py webhdfs-0:/tmp
- script: kubectl cp -n $NAMESPACE ./testdata.txt webhdfs-0:/tmp
- script: kubectl cp -n $NAMESPACE ./requirements.txt webhdfs-0:/tmp
- script: kubectl exec -n $NAMESPACE webhdfs-0 -- pip install --user -r /tmp/requirements.txt
- script: kubectl exec -n $NAMESPACE webhdfs-0 -- python /tmp/webhdfs.py create
5 changes: 0 additions & 5 deletions tests/templates/kuttl/fs-ops/requirements.txt

This file was deleted.