feat: add hdfs hive, hbase and hadoop addon #1375
Open: shanshanying wants to merge 10 commits into main from feat/add-hdfs-hive-spark
Changes from all commits (10 commits):
All commits authored by shanshanying:

- c2ec934 feat: add hdfs hive and spark addon
- 2e6dd79 pick updates for hadoop-hdfs, hive, hbase, and remove spark
- 4400617 add apiVersion annotation to cd/cmpd/cmpv
- d77541f chore: auto generated files
- 3eee6e1 update vars and envs
- e4b3f66 add serviceRef for data node
- c018df1 update hive metastore template
- 2ddbc69 update hadoop stack example
- 3b9072c add helm reource policy annotation to cmpd
- ba7b893 chore: auto generated files
New file (Helm chart metadata, +25 lines):

```yaml
apiVersion: v2
name: hadoop-cluster
description: A Hadoop cluster Helm chart for KubeBlocks.

type: application

version: 0.9.1

appVersion: 3.3.4

dependencies:
  - name: kblib
    version: 0.1.2
    repository: file://../kblib
    alias: extra

home: https://hadoop.apache.org/
icon: https://hadoop.apache.org/hadoop-logo.jpg
keywords:
  - bigdata
  - mapreduce

maintainers:
  - name: ApeCloud
    url: https://kubeblocks.io/
```
New file (template helpers, +60 lines):

```
{{/*
Expand the name of the chart.
*/}}
{{- define "hadoop-cluster.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "hadoop-cluster.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}

{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "hadoop-cluster.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Common labels
*/}}
{{- define "hadoop-cluster.labels" -}}
helm.sh/chart: {{ include "hadoop-cluster.chart" . }}
{{ include "hadoop-cluster.selectorLabels" . }}
app.kubernetes.io/version: {{ default .Chart.AppVersion .Values.appVersionOverride | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

{{/*
Selector labels
*/}}
{{- define "hadoop-cluster.selectorLabels" -}}
app.kubernetes.io/name: {{ include "hadoop-cluster.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

{{- define "clustername" -}}
{{ include "hadoop-cluster.fullname" .}}
{{- end}}

{{/*
Create the name of the service account to use
*/}}
{{- define "hadoop-cluster.serviceAccountName" -}}
{{- default (printf "kb-%s" (include "clustername" .)) .Values.serviceAccount.name }}
{{- end }}
```
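For illustration, tracing these helpers with a hypothetical release named `myrelease` (the release name is an assumption, not part of the PR): the release name does not contain the chart name, so `hadoop-cluster.fullname` renders `myrelease-hadoop-cluster`, and the common labels would come out roughly as:

```yaml
# Hypothetical output of the "hadoop-cluster.labels" helper for a release
# named "myrelease" with the chart version and appVersion from this PR:
helm.sh/chart: hadoop-cluster-0.9.1
app.kubernetes.io/name: hadoop-cluster
app.kubernetes.io/instance: myrelease
app.kubernetes.io/version: "3.3.4"
app.kubernetes.io/managed-by: Helm
# "hadoop-cluster.serviceAccountName" falls back to kb-<fullname> when
# .Values.serviceAccount.name is unset: kb-myrelease-hadoop-cluster
```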
New file (Cluster template, +105 lines):

```yaml
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  namespace: {{ .Release.Namespace }}
  name: {{ include "kblib.clusterName" . }}
  labels: {{ include "kblib.clusterLabels" . | nindent 4 }}
spec:
  terminationPolicy: Delete
  clusterDef: hadoop-hdfs
  topology: hadoop-ha-cluster
  componentSpecs:
    - name: hadoop-core
      componentDef: hadoop-core
      serviceRefs:
        - name: hadoopZookeeper
          namespace: {{ .Values.serviceRefs.hadoopZookeeper.namespace }}
          clusterServiceSelector:
            cluster: {{ .Values.serviceRefs.hadoopZookeeper.clusterServiceSelector.cluster }}
            service:
              component: {{ .Values.serviceRefs.hadoopZookeeper.clusterServiceSelector.service.component }}
              service: {{ .Values.serviceRefs.hadoopZookeeper.clusterServiceSelector.service.service }}
              port: {{ .Values.serviceRefs.hadoopZookeeper.clusterServiceSelector.service.port }}
      replicas: {{ .Values.replicas.core }}
      resources:
        requests:
          cpu: {{ .Values.resources.core.requests.cpu | quote }}
          memory: {{ .Values.resources.core.requests.memory }}
        limits:
          cpu: {{ .Values.resources.core.limits.cpu | quote }}
          memory: {{ .Values.resources.core.limits.memory }}
    - name: journalnode
      componentDef: hdfs-journalnode
      replicas: {{ .Values.replicas.journalnode }}
      resources:
        requests:
          cpu: {{ .Values.resources.journalnode.requests.cpu | quote }}
          memory: {{ .Values.resources.journalnode.requests.memory }}
        limits:
          cpu: {{ .Values.resources.journalnode.limits.cpu | quote }}
          memory: {{ .Values.resources.journalnode.limits.memory }}
      volumes:
        - name: hadoop-core-config
          configMap:
            name: {{ include "kblib.clusterName" . }}-hadoop-core-config
      volumeClaimTemplates:
        - name: edits-dir
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: {{ .Values.storage.journalnode }}
    - name: namenode
      componentDef: hdfs-namenode
      serviceRefs:
        - name: hadoopZookeeper
          namespace: {{ .Values.serviceRefs.hadoopZookeeper.namespace }}
          clusterServiceSelector:
            cluster: {{ .Values.serviceRefs.hadoopZookeeper.clusterServiceSelector.cluster }}
            service:
              component: {{ .Values.serviceRefs.hadoopZookeeper.clusterServiceSelector.service.component }}
              service: {{ .Values.serviceRefs.hadoopZookeeper.clusterServiceSelector.service.service }}
              port: {{ .Values.serviceRefs.hadoopZookeeper.clusterServiceSelector.service.port }}
      replicas: {{ .Values.replicas.namenode }}
      resources:
        requests:
          cpu: {{ .Values.resources.namenode.requests.cpu | quote }}
          memory: {{ .Values.resources.namenode.requests.memory }}
        limits:
          cpu: {{ .Values.resources.namenode.limits.cpu | quote }}
          memory: {{ .Values.resources.namenode.limits.memory }}
      volumes:
        - name: hadoop-core-config
          configMap:
            name: {{ include "kblib.clusterName" . }}-hadoop-core-config
      volumeClaimTemplates:
        - name: metadata-dir
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: {{ .Values.storage.namenode }}
    - name: datanode
      componentDef: hdfs-datanode
      replicas: {{ .Values.replicas.datanode }}
      resources:
        requests:
          cpu: {{ .Values.resources.datanode.requests.cpu | quote }}
          memory: {{ .Values.resources.datanode.requests.memory }}
        limits:
          cpu: {{ .Values.resources.datanode.limits.cpu | quote }}
          memory: {{ .Values.resources.datanode.limits.memory }}
      volumes:
        - name: hadoop-core-config
          configMap:
            name: {{ include "kblib.clusterName" . }}-hadoop-core-config
      volumeClaimTemplates:
        - name: hdfs-data-0
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: {{ .Values.storage.datanode }}
```
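The template above expects a values layout with `replicas`, `resources`, `storage`, and `serviceRefs` keys per component. As a sketch (not itself part of the diff), a values file matching those paths, populated with the defaults declared in this PR's values schema, would look like:

```yaml
replicas:
  core: 1
  journalnode: 3
  namenode: 2
  datanode: 1
resources:
  # Same shape is expected for core, journalnode, namenode, and datanode.
  namenode:
    requests: { cpu: "0.5", memory: 0.5Gi }
    limits: { cpu: "0.5", memory: 2Gi }
storage:
  journalnode: 10Gi
  namenode: 10Gi
  datanode: 30Gi
serviceRefs:
  hadoopZookeeper:
    namespace: default
    clusterServiceSelector:
      cluster: zkcluster
      service:
        component: zookeeper
        service: headless
        port: client
```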
New file (values schema, +122 lines):

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "replicas": {
      "type": "object",
      "properties": {
        "core": { "type": "integer", "default": 1, "minimum": 1, "maximum": 5 },
        "journalnode": { "type": "integer", "default": 3, "minimum": 1, "maximum": 5 },
        "namenode": { "type": "integer", "default": 2, "minimum": 1, "maximum": 5 },
        "datanode": { "type": "integer", "default": 1, "minimum": 1, "maximum": 5 }
      },
      "required": ["core", "journalnode", "namenode", "datanode"],
      "description": "Number of replicas for each component."
    },
    "resources": {
      "type": "object",
      "properties": {
        "core": { "$ref": "#/definitions/resourceSchema" },
        "journalnode": { "$ref": "#/definitions/resourceSchema" },
        "namenode": { "$ref": "#/definitions/resourceSchema" },
        "datanode": { "$ref": "#/definitions/resourceSchema" }
      },
      "required": ["core", "journalnode", "namenode", "datanode"],
      "description": "Resource requests and limits for each component."
    },
    "storage": {
      "type": "object",
      "properties": {
        "journalnode": { "type": "string", "default": "10Gi" },
        "namenode": { "type": "string", "default": "10Gi" },
        "datanode": { "type": "string", "default": "30Gi" }
      },
      "required": ["journalnode", "namenode", "datanode"],
      "description": "Storage requirements for each component."
    },
    "serviceRefs": {
      "type": "object",
      "properties": {
        "hadoopZookeeper": {
          "type": "object",
          "properties": {
            "namespace": { "type": "string", "default": "default" },
            "clusterServiceSelector": {
              "type": "object",
              "properties": {
                "cluster": { "type": "string", "default": "zkcluster" },
                "service": {
                  "type": "object",
                  "properties": {
                    "component": { "type": "string", "default": "zookeeper" },
                    "service": { "type": "string", "default": "headless" },
                    "port": { "type": "string", "default": "client" }
                  },
                  "required": ["component", "service", "port"]
                }
              },
              "required": ["cluster", "service"]
            }
          },
          "required": ["namespace", "clusterServiceSelector"],
          "description": "References to external services."
        }
      },
      "required": ["hadoopZookeeper"],
      "description": "Service references for external dependencies."
    },
    "serviceAccount": {
      "type": "object",
      "properties": {
        "name": { "type": "string", "default": "kb-hadoop-cluster" },
        "create": { "type": "boolean", "default": true }
      },
      "required": ["name", "create"],
      "description": "Configuration for the service account."
    },
    "terminationPolicy": {
      "type": "string",
      "default": "Delete",
      "description": "Policy for terminating the release."
    },
    "clusterDef": {
      "type": "string",
      "default": "hadoop-hdfs",
      "description": "Reference to the cluster definition."
    },
    "topology": {
      "type": "string",
      "default": "hadoop-ha-cluster",
      "description": "Cluster topology configuration."
    }
  },
  "required": ["replicas", "resources", "storage", "serviceRefs", "serviceAccount", "terminationPolicy", "clusterDef", "topology"],
  "definitions": {
    "resourceSchema": {
      "type": "object",
      "properties": {
        "requests": {
          "type": "object",
          "properties": {
            "cpu": { "type": "string", "default": "0.5" },
            "memory": { "type": "string", "default": "0.5Gi" }
          },
          "required": ["cpu", "memory"],
          "description": "Resource requests for the component."
        },
        "limits": {
          "type": "object",
          "properties": {
            "cpu": { "type": "string", "default": "0.5" },
            "memory": { "type": "string", "default": "2Gi" }
          },
          "required": ["cpu", "memory"],
          "description": "Resource limits for the component."
        }
      },
      "required": ["requests", "limits"],
      "description": "Resource configuration for a component."
    }
  },
  "description": "Schema for Hadoop HDFS Helm chart values."
}
```
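As an illustration (not part of the diff), feeding the schema defaults above through the datanode entry of the cluster template earlier in this PR would render roughly the following component spec:

```yaml
# Hypothetical rendered output for the datanode component, assuming the
# schema defaults (replicas.datanode=1, cpu "0.5", memory 0.5Gi/2Gi,
# storage.datanode 30Gi) are supplied as values:
- name: datanode
  componentDef: hdfs-datanode
  replicas: 1
  resources:
    requests:
      cpu: "0.5"      # cpu is piped through | quote in the template
      memory: 0.5Gi   # memory is emitted unquoted
    limits:
      cpu: "0.5"
      memory: 2Gi
  volumeClaimTemplates:
    - name: hdfs-data-0
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 30Gi
```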
Review comment on the chart's version field: change 0.9.1 to 1.0.0-alpha.0.