-rw-r--r--  README.md                                                        39
-rw-r--r--  configure/gke-storageclass-broker-pd.yml                          8
-rw-r--r--  configure/gke-storageclass-zookeeper-ssd.yml                      8
-rw-r--r--  configure/minikube-storageclass-broker.yml                        6
-rw-r--r--  configure/minikube-storageclass-zookeeper.yml                     6
-rw-r--r--  kafka/00namespace.yml (renamed from 00namespace.yml)              0
-rw-r--r--  kafka/10broker-config.yml (renamed from 10broker-config.yml)     38
-rw-r--r--  kafka/20dns.yml (renamed from 20dns.yml)                          0
-rw-r--r--  kafka/30bootstrap-service.yml                                    11
-rw-r--r--  kafka/50kafka.yml (renamed from 50kafka.yml)                     45
-rw-r--r--  kafka/test/00namespace.yml (renamed from test/00namespace.yml)   0
-rw-r--r--  kafka/test/kafkacat.yml                                         183
-rw-r--r--  kafka/test/produce-consume.yml                                  166
-rw-r--r--  outside-services/outside-0.yml                                   15
-rw-r--r--  outside-services/outside-1.yml                                   15
-rw-r--r--  outside-services/outside-2.yml                                   15
-rwxr-xr-x  prod-yolean.sh                                                   50
-rw-r--r--  rbac-namespace-default/node-reader.yml                           37
-rw-r--r--  test/basic-produce-consume.yml                                   89
-rw-r--r--  test/basic-with-kafkacat.yml                                    103
-rw-r--r--  yahoo-kafka-manager/kafka-manager-service.yml                    12
-rw-r--r--  yahoo-kafka-manager/kafka-manager.yml                            26
-rw-r--r--  zookeeper/50pzoo.yml                                             24
-rw-r--r--  zookeeper/51zoo.yml                                              20
24 files changed, 627 insertions, 289 deletions
diff --git a/README.md b/README.md
index 9853d12..526b7d5 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,5 @@
-
+_Manifests here require Kubernetes 1.8 now.
+On earlier versions use [v2.1.0](https://github.com/Yolean/kubernetes-kafka/tree/v2.1.0)._
# Kafka on Kubernetes
@@ -6,16 +7,21 @@ Transparent Kafka setup that you can grow with.
Good for both experiments and production.
How to use:
+ * Good to know: you'll likely want to fork this repo. It prioritizes clarity over configurability, using plain manifests and .properties files; no client-side logic.
* Run a Kubernetes cluster, [minikube](https://github.com/kubernetes/minikube) or real.
* Quickstart: use the `kubectl apply`s below.
- * Kafka for real: fork and have a look at [addon](https://github.com/Yolean/kubernetes-kafka/labels/addon)s.
+ * Have a look at [addon](https://github.com/Yolean/kubernetes-kafka/labels/addon)s, or the official forks:
+ - [kubernetes-kafka-small](https://github.com/Reposoft/kubernetes-kafka-small) for single-node clusters like Minikube.
+ - [StreamingMicroservicesPlatform](https://github.com/StreamingMicroservicesPlatform/kubernetes-kafka): like Confluent's [platform quickstart](https://docs.confluent.io/current/connect/quickstart.html) but for Kubernetes.
* Join the discussion in issues and PRs.
-Why?
-See for yourself, but we think this project gives you better adaptability than [helm](https://github.com/kubernetes/helm) [chart](https://github.com/kubernetes/charts/tree/master/incubator/kafka)s. No single readable readme or template can properly introduce both Kafka and Kubernets.
+No readable readme can properly introduce both [Kafka](http://kafka.apache.org/) and [Kubernetes](https://kubernetes.io/),
+but we think the combination of the two is a great backbone for microservices.
Back when we read [Newman](http://samnewman.io/books/building_microservices/) we were beginners with both.
Now we've read [Kleppmann](http://dataintensive.net/), [Confluent](https://www.confluent.io/blog/) and [SRE](https://landing.google.com/sre/book.html) and enjoy this "Streaming Platform" lock-in :smile:.
+We also think the plain-yaml approach of this project is easier to understand and evolve than [helm](https://github.com/kubernetes/helm) [chart](https://github.com/kubernetes/charts/tree/master/incubator/kafka)s.
+
## What you get
Keep an eye on `kubectl --namespace kafka get pods -w`.
@@ -25,6 +31,12 @@ The goal is to provide [Bootstrap servers](http://kafka.apache.org/documentation
Zookeeper at `zookeeper.kafka.svc.cluster.local:2181`.
+## Prepare storage classes
+
+For Minikube run `kubectl apply -f configure/minikube-storageclass-broker.yml; kubectl apply -f configure/minikube-storageclass-zookeeper.yml`.
+
+There's a similar setup for GKE, `configure/gke-*`. You might want to tweak it before creating.
+
## Start Zookeeper
The [Kafka book](https://www.confluent.io/resources/kafka-definitive-guide-preview-edition/) recommends that Kafka has its own Zookeeper cluster with at least 5 instances.
@@ -38,7 +50,7 @@ To support automatic migration in the face of availability zone unavailability w
## Start Kafka
```
-kubectl apply -f ./
+kubectl apply -f ./kafka/
```
You might want to verify in logs that Kafka found its own DNS name(s) correctly. Look for records like:
@@ -50,12 +62,19 @@ kubectl -n kafka logs kafka-0 | grep "Registered broker"
That's it. Just add business value :wink:.
For clients we tend to use [librdkafka](https://github.com/edenhill/librdkafka)-based drivers like [node-rdkafka](https://github.com/Blizzard/node-rdkafka).
To use [Kafka Connect](http://kafka.apache.org/documentation/#connect) and [Kafka Streams](http://kafka.apache.org/documentation/streams/) you may want to take a look at our [sample](https://github.com/solsson/dockerfiles/tree/master/connect-files) [Dockerfile](https://github.com/solsson/dockerfiles/tree/master/streams-logfilter)s.
-Don't forget the [addon](https://github.com/Yolean/kubernetes-kafka/labels/addon)s.
-# Tests
+## RBAC
+For clusters that enforce [RBAC](https://kubernetes.io/docs/admin/authorization/rbac/) there's a minimal set of policies in
```
-kubectl apply -f test/
-# Anything that isn't READY here is a failed test
-kubectl get pods -l test-target=kafka,test-type=readiness -w --all-namespaces
+kubectl apply -f rbac-namespace-default/
```
+
+## Tests
+
+Tests are based on the [kube-test](https://github.com/Yolean/kube-test) concept.
+Like the rest of this repo they have `kubectl` as the only local dependency.
+
+Running the self-tests is optional. They generate some load, but they indicate whether the platform is working.
+ * To include tests, replace `apply -f` with `apply -R -f` in your `kubectl`s above.
+ * Anything that isn't READY in `kubectl get pods -l test-type=readiness --namespace=test-kafka` is a failed test.
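Pieced together, the quickstart that the README changes above describe amounts to a handful of `kubectl apply`s. A sketch for Minikube (the `zookeeper/` path isn't shown in these hunks, so that step is inferred from the repo layout):

```
# Storage classes first, so the StatefulSets' volume claims can bind
kubectl apply -f configure/minikube-storageclass-broker.yml
kubectl apply -f configure/minikube-storageclass-zookeeper.yml
# Minimal policies for clusters that enforce RBAC
kubectl apply -f rbac-namespace-default/
# Zookeeper, then Kafka; use apply -R -f to include the tests under kafka/test/
kubectl apply -f zookeeper/
kubectl apply -f kafka/
# Keep an eye on pods coming up
kubectl --namespace kafka get pods -w
```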
diff --git a/configure/gke-storageclass-broker-pd.yml b/configure/gke-storageclass-broker-pd.yml
new file mode 100644
index 0000000..dbb7203
--- /dev/null
+++ b/configure/gke-storageclass-broker-pd.yml
@@ -0,0 +1,8 @@
+kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+ name: kafka-broker
+provisioner: kubernetes.io/gce-pd
+reclaimPolicy: Retain
+parameters:
+ type: pd-standard
diff --git a/configure/gke-storageclass-zookeeper-ssd.yml b/configure/gke-storageclass-zookeeper-ssd.yml
new file mode 100644
index 0000000..5d6673a
--- /dev/null
+++ b/configure/gke-storageclass-zookeeper-ssd.yml
@@ -0,0 +1,8 @@
+kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+ name: kafka-zookeeper
+provisioner: kubernetes.io/gce-pd
+reclaimPolicy: Retain
+parameters:
+ type: pd-ssd
diff --git a/configure/minikube-storageclass-broker.yml b/configure/minikube-storageclass-broker.yml
new file mode 100644
index 0000000..ae930b4
--- /dev/null
+++ b/configure/minikube-storageclass-broker.yml
@@ -0,0 +1,6 @@
+kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+ name: kafka-broker
+provisioner: k8s.io/minikube-hostpath
+reclaimPolicy: Retain
diff --git a/configure/minikube-storageclass-zookeeper.yml b/configure/minikube-storageclass-zookeeper.yml
new file mode 100644
index 0000000..48c0f35
--- /dev/null
+++ b/configure/minikube-storageclass-zookeeper.yml
@@ -0,0 +1,6 @@
+kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+ name: kafka-zookeeper
+provisioner: k8s.io/minikube-hostpath
+reclaimPolicy: Retain
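A quick sanity check after creating either pair of classes; the claims from the StatefulSets further down should bind to `kafka-broker` and `kafka-zookeeper` respectively:

```
kubectl get storageclass
kubectl --namespace kafka get pvc
```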
diff --git a/00namespace.yml b/kafka/00namespace.yml
index a6cf001..a6cf001 100644
--- a/00namespace.yml
+++ b/kafka/00namespace.yml
diff --git a/10broker-config.yml b/kafka/10broker-config.yml
index af0f037..c0c4c8c 100644
--- a/10broker-config.yml
+++ b/kafka/10broker-config.yml
@@ -8,8 +8,31 @@ data:
#!/bin/bash
set -x
- export KAFKA_BROKER_ID=${HOSTNAME##*-}
- sed -i "s/\${KAFKA_BROKER_ID}/$KAFKA_BROKER_ID/" /etc/kafka/server.properties
+ KAFKA_BROKER_ID=${HOSTNAME##*-}
+ sed -i "s/#init#broker.id=#init#/broker.id=$KAFKA_BROKER_ID/" /etc/kafka/server.properties
+
+ hash kubectl 2>/dev/null || {
+ sed -i "s/#init#broker.rack=#init#/#init#broker.rack=# kubectl not found in path/" /etc/kafka/server.properties
+ } && {
+ ZONE=$(kubectl get node "$NODE_NAME" -o=go-template='{{index .metadata.labels "failure-domain.beta.kubernetes.io/zone"}}')
+ if [ $? -ne 0 ]; then
+ sed -i "s/#init#broker.rack=#init#/#init#broker.rack=# zone lookup failed, see -c init-config logs/" /etc/kafka/server.properties
+ elif [ "x$ZONE" == "x<no value>" ]; then
+ sed -i "s/#init#broker.rack=#init#/#init#broker.rack=# zone label not found for node $NODE_NAME/" /etc/kafka/server.properties
+ else
+ sed -i "s/#init#broker.rack=#init#/broker.rack=$ZONE/" /etc/kafka/server.properties
+ fi
+
+ kubectl -n $POD_NAMESPACE label pod $POD_NAME kafka-broker-id=$KAFKA_BROKER_ID
+
+ OUTSIDE_HOST=$(kubectl get node "$NODE_NAME" -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}')
+ if [ $? -ne 0 ]; then
+ echo "Outside (i.e. cluster-external access) host lookup command failed"
+ else
+ OUTSIDE_HOST=${OUTSIDE_HOST}:3240${KAFKA_BROKER_ID}
+ sed -i "s|#init#advertised.listeners=OUTSIDE://#init#|advertised.listeners=OUTSIDE://${OUTSIDE_HOST}|" /etc/kafka/server.properties
+ fi
+ }
server.properties: |-
# Licensed to the Apache Software Foundation (ASF) under one or more
@@ -32,10 +55,9 @@ data:
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
- broker.id=${KAFKA_BROKER_ID}
+ #init#broker.id=#init#
- # Switch to enable topic deletion or not, default value is false
- #delete.topic.enable=true
+ #init#broker.rack=#init#
############################# Socket Server Settings #############################
@@ -46,14 +68,18 @@ data:
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092
+ listeners=OUTSIDE://:9094,PLAINTEXT://:9092
# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092
+ #init#advertised.listeners=OUTSIDE://#init#,PLAINTEXT://:9092
# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
+ listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL,OUTSIDE:PLAINTEXT
+ inter.broker.listener.name=PLAINTEXT
# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3
@@ -170,7 +196,7 @@ data:
# Unspecified loggers and loggers with additivity=true output to server.log and stdout
# Note that INFO only applies to unspecified loggers, the log level of the child logger is used otherwise
- log4j.rootLogger=INFO, stdout, kafkaAppender
+ log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
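The `#init#` markers are deliberate placeholders: `init.sh` rewrites them with `sed` before the broker starts, leaving either a real value or a comment explaining why the lookup failed. One way to inspect the rendered config in a running pod:

```
# broker.id is derived from the StatefulSet ordinal in $HOSTNAME
kubectl -n kafka exec kafka-0 -- grep -E 'broker.id|broker.rack|advertised.listeners' /etc/kafka/server.properties
# if broker.rack is still a comment, the init container logs say why
kubectl -n kafka logs kafka-0 -c init-config
```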
diff --git a/20dns.yml b/kafka/20dns.yml
index 4088c31..4088c31 100644
--- a/20dns.yml
+++ b/kafka/20dns.yml
diff --git a/kafka/30bootstrap-service.yml b/kafka/30bootstrap-service.yml
new file mode 100644
index 0000000..7c2a337
--- /dev/null
+++ b/kafka/30bootstrap-service.yml
@@ -0,0 +1,11 @@
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: bootstrap
+ namespace: kafka
+spec:
+ ports:
+ - port: 9092
+ selector:
+ app: kafka
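This plain service load-balances over all ready brokers, giving clients the stable `bootstrap.kafka:9092` address that the kafkacat testcase below uses; brokers then answer with their individual advertised listeners. A minimal check from any pod that has kafkacat:

```
# Request cluster metadata via the bootstrap service
kafkacat -L -b bootstrap.kafka:9092
```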
diff --git a/50kafka.yml b/kafka/50kafka.yml
index 4404a6b..1157235 100644
--- a/50kafka.yml
+++ b/kafka/50kafka.yml
@@ -1,11 +1,16 @@
-apiVersion: apps/v1beta1
+apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
name: kafka
namespace: kafka
spec:
+ selector:
+ matchLabels:
+ app: kafka
serviceName: "broker"
replicas: 3
+ updateStrategy:
+ type: OnDelete
template:
metadata:
labels:
@@ -15,19 +21,39 @@ spec:
terminationGracePeriodSeconds: 30
initContainers:
- name: init-config
- image: solsson/kafka:0.11.0.0@sha256:b27560de08d30ebf96d12e74f80afcaca503ad4ca3103e63b1fd43a2e4c976ce
+ image: solsson/kafka-initutils@sha256:c98d7fb5e9365eab391a5dcd4230fc6e72caf929c60f29ff091e3b0215124713
+ env:
+ - name: NODE_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: spec.nodeName
+ - name: POD_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.name
+ - name: POD_NAMESPACE
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.namespace
command: ['/bin/bash', '/etc/kafka/init.sh']
volumeMounts:
- name: config
mountPath: /etc/kafka
containers:
- name: broker
- image: solsson/kafka:0.11.0.0@sha256:b27560de08d30ebf96d12e74f80afcaca503ad4ca3103e63b1fd43a2e4c976ce
+ image: solsson/kafka:1.0.0@sha256:17fdf1637426f45c93c65826670542e36b9f3394ede1cb61885c6a4befa8f72d
env:
- name: KAFKA_LOG4J_OPTS
value: -Dlog4j.configuration=file:/etc/kafka/log4j.properties
+ - name: JMX_PORT
+ value: "5555"
ports:
- - containerPort: 9092
+ - name: inside
+ containerPort: 9092
+ - name: outside
+ containerPort: 9094
+ - name: jmx
+ containerPort: 5555
command:
- ./bin/kafka-server-start.sh
- /etc/kafka/server.properties
@@ -43,12 +69,10 @@ spec:
requests:
cpu: 100m
memory: 512Mi
- livenessProbe:
- exec:
- command:
- - /bin/sh
- - -c
- - 'echo "" | nc -w 1 127.0.0.1 9092'
+ readinessProbe:
+ tcpSocket:
+ port: 9092
+ timeoutSeconds: 1
volumeMounts:
- name: config
mountPath: /etc/kafka
@@ -63,6 +87,7 @@ spec:
name: data
spec:
accessModes: [ "ReadWriteOnce" ]
+ storageClassName: kafka-broker
resources:
requests:
storage: 200Gi
diff --git a/test/00namespace.yml b/kafka/test/00namespace.yml
index fbb6e0e..fbb6e0e 100644
--- a/test/00namespace.yml
+++ b/kafka/test/00namespace.yml
diff --git a/kafka/test/kafkacat.yml b/kafka/test/kafkacat.yml
new file mode 100644
index 0000000..8d6c64e
--- /dev/null
+++ b/kafka/test/kafkacat.yml
@@ -0,0 +1,183 @@
+---
+kind: ConfigMap
+metadata:
+ name: kafkacat
+ namespace: test-kafka
+apiVersion: v1
+data:
+
+ setup.sh: |-
+ touch /tmp/testlog
+
+ tail -f /tmp/testlog
+
+ test.sh: |-
+ exec >> /tmp/testlog
+ exec 2>&1
+
+ PC_WAIT=.2
+ MAX_AGE=5
+
+ UNIQUE="${HOSTNAME}@$(date -u -Ins)"
+
+ echo "${UNIQUE: -41:5}:Test $UNIQUE" >> /shared/produce.tmp
+ sleep $PC_WAIT
+ LAST=$(tail -n 1 /shared/consumed.tmp)
+ [ -z "$LAST" ] && echo "Nothing consumed" && exit 1
+
+ LAST_TS=$(echo $LAST | awk -F';' '{print $1}')
+ [ -z "$LAST_TS" ] && echo "Failed to get timestamp for message: $LAST" && exit 1
+ LAST_MSG=$(echo $LAST | awk -F';' '{print $4}')
+ NOW=$(date -u +%s%3N)
+ DIFF_S=$((($NOW - $LAST_TS)/1000))
+ DIFF_MS=$((($NOW - $LAST_TS)%1000))
+ #echo "$NOW ($(date +%FT%H:%M:%S.%3N)):"
+ #echo "$LAST_TS"
+
+ if [ $DIFF_S -gt $MAX_AGE ]; then
+ echo "Last message is $DIFF_S.$DIFF_MS old:"
+ echo "$LAST_MSG"
+ exit 10
+ fi
+
+ if [[ "$LAST_MSG" != *"$UNIQUE" ]]; then
+ echo "Last message (at $LAST_TS) isn't from this test run ($UNIQUE):"
+ echo "$LAST_MSG"
+ exit 11
+ fi
+
+ # get info about this message
+ kafkacat -Q -b $BOOTSTRAP -t test-kafkacat:0:$LAST_TS \
+ -X socket.timeout.ms=600 -X session.timeout.ms=300 -X request.timeout.ms=50 -X metadata.request.timeout.ms=600
+ [ $? -eq 0 ] || echo "At $(date +%FT%H:%M:%S.%3N) bootstrap broker(s) might be down"
+ # but don't fail the test; producer and consumer should keep going if there are other brokers
+
+ # We haven't asserted that the consumer works, so we'll just have to assume that it will exit if it fails
+
+ exit 0
+
+ quit-on-nonzero-exit.sh: |-
+ exec >> /tmp/testlog
+ exec 2>&1
+
+ exit 0
+---
+apiVersion: batch/v1
+kind: Job
+metadata:
+ name: topic-test-kafkacat
+ namespace: test-kafka
+spec:
+ template:
+ spec:
+ containers:
+ - name: topic-create
+ image: solsson/kafka:1.0.0@sha256:17fdf1637426f45c93c65826670542e36b9f3394ede1cb61885c6a4befa8f72d
+ command:
+ - ./bin/kafka-topics.sh
+ - --zookeeper
+ - zookeeper.kafka.svc.cluster.local:2181
+ - --create
+ - --if-not-exists
+ - --topic
+ - test-kafkacat
+ - --partitions
+ - "1"
+ - --replication-factor
+ - "2"
+ restartPolicy: Never
+---
+apiVersion: apps/v1beta2
+kind: ReplicaSet
+metadata:
+ name: kafkacat
+ namespace: test-kafka
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ test-target: kafka-client-kafkacat
+ test-type: readiness
+ template:
+ metadata:
+ labels:
+ test-target: kafka-client-kafkacat
+ test-type: readiness
+ # for example:
+ # readonly - can be used in production
+ # isolated - read/write but in a manner that does not affect other services
+ # load - unsuitable for production because it uses significant resources
+ # chaos - unsuitable for production because it injects failure modes
+ #test-use:
+ spec:
+ containers:
+ - name: producer
+ image: solsson/kafkacat@sha256:b32eedf936f3cde44cd164ddc77dfcf7565a8af4e357ff6de1abe4389ca530c9
+ env:
+ - name: BOOTSTRAP
+ value: kafka-0.broker.kafka.svc.cluster.local:9092
+ command:
+ - /bin/bash
+ - -cex
+ - >
+ echo "--- start $HOSTNAME $(date --iso-8601='ns' -u) ---" >> /shared/produce.tmp
+ ;
+ tail -f /shared/produce.tmp |
+ kafkacat -P -b $BOOTSTRAP -t test-kafkacat -v -T -d broker -K ':'
+ ;
+ volumeMounts:
+ - name: config
+ mountPath: /test
+ - name: shared
+ mountPath: /shared
+ - name: consumer
+ image: solsson/kafkacat@sha256:b32eedf936f3cde44cd164ddc77dfcf7565a8af4e357ff6de1abe4389ca530c9
+ env:
+ - name: BOOTSTRAP
+ value: kafka-0.broker.kafka.svc.cluster.local:9092
+ command:
+ - /bin/bash
+ - -cex
+ - >
+ kafkacat -C -b $BOOTSTRAP -t test-kafkacat -o -1 -f '%T;%k:%p;%o;%s\n' -u -d broker |
+ tee /shared/consumed.tmp
+ ;
+ volumeMounts:
+ - name: config
+ mountPath: /test
+ - name: shared
+ mountPath: /shared
+ - name: testcase
+ image: solsson/kafkacat@sha256:b32eedf936f3cde44cd164ddc77dfcf7565a8af4e357ff6de1abe4389ca530c9
+ env:
+ - name: BOOTSTRAP
+ value: bootstrap.kafka:9092
+ command:
+ - /bin/bash
+ - -e
+ - /test/setup.sh
+ readinessProbe:
+ exec:
+ command:
+ - /bin/bash
+ - -e
+ - /test/test.sh
+ initialDelaySeconds: 10
+ periodSeconds: 10
+ livenessProbe:
+ exec:
+ command:
+ - /bin/bash
+ - -e
+ - /test/quit-on-nonzero-exit.sh
+ volumeMounts:
+ - name: config
+ mountPath: /test
+ - name: shared
+ mountPath: /shared
+ volumes:
+ - name: config
+ configMap:
+ name: kafkacat
+ - name: shared
+ emptyDir: {}
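Since `setup.sh` tails `/tmp/testlog` as the testcase container's main process, test output is visible through normal pod logs. For example (the pod name placeholder stands for whatever the ReplicaSet generated):

```
kubectl -n test-kafka get pods -l test-target=kafka-client-kafkacat
kubectl -n test-kafka logs -f <kafkacat-pod-name> -c testcase
```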
diff --git a/kafka/test/produce-consume.yml b/kafka/test/produce-consume.yml
new file mode 100644
index 0000000..a326f01
--- /dev/null
+++ b/kafka/test/produce-consume.yml
@@ -0,0 +1,166 @@
+---
+kind: ConfigMap
+metadata:
+ name: produce-consume
+ namespace: test-kafka
+apiVersion: v1
+data:
+
+ setup.sh: |-
+ touch /tmp/testlog
+
+ tail -f /tmp/testlog
+
+ test.sh: |-
+ exec >> /tmp/testlog
+ exec 2>&1
+
+    # A wait as low as in the kafkacat-based test didn't work, but this value can likely be squeezed
+ PC_WAIT=2.0
+
+ UNIQUE="${HOSTNAME}@$(date -u -Ins)"
+
+ echo "Test $UNIQUE" >> /shared/produce.tmp
+ sleep $PC_WAIT
+ LAST=$(tail -n 1 /shared/consumed.tmp)
+ [ -z "$LAST" ] && echo "Nothing consumed yet" && exit 1
+
+    # Haven't found a way to get the message timestamp in console-consumer; see the kafkacat-based test instead
+ LAST_MSG=$LAST
+
+ if [[ "$LAST_MSG" != *"$UNIQUE" ]]; then
+ echo "Last message (at $(date +%FT%T)) isn't from this test run ($UNIQUE):"
+ echo "$LAST_MSG"
+ exit 11
+ fi
+
+ echo "OK ($LAST_MSG at $(date +%FT%T))"
+ # We haven't asserted that the consumer works, so we'll just have to assume that it will exit if it fails
+
+ exit 0
+
+ quit-on-nonzero-exit.sh: |-
+ exec >> /tmp/testlog
+ exec 2>&1
+
+ exit 0
+---
+apiVersion: batch/v1
+kind: Job
+metadata:
+ name: topic-test-produce-consume
+ namespace: test-kafka
+spec:
+ template:
+ spec:
+ containers:
+ - name: topic-create
+ image: solsson/kafka:1.0.0@sha256:17fdf1637426f45c93c65826670542e36b9f3394ede1cb61885c6a4befa8f72d
+ command:
+ - ./bin/kafka-topics.sh
+ - --zookeeper
+ - zookeeper.kafka.svc.cluster.local:2181
+ - --create
+ - --if-not-exists
+ - --topic
+ - test-produce-consume
+ - --partitions
+ - "1"
+ - --replication-factor
+ - "2"
+ restartPolicy: Never
+---
+apiVersion: apps/v1beta2
+kind: ReplicaSet
+metadata:
+ name: produce-consume
+ namespace: test-kafka
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ test-target: kafka-client-console
+ test-type: readiness
+ template:
+ metadata:
+ labels:
+ test-target: kafka-client-console
+ test-type: readiness
+ # for example:
+ # readonly - can be used in production
+ # isolated - read/write but in a manner that does not affect other services
+ # load - unsuitable for production because it uses significant resources
+ # chaos - unsuitable for production because it injects failure modes
+ #test-use:
+ spec:
+ containers:
+ - name: producer
+ image: solsson/kafka:1.0.0@sha256:17fdf1637426f45c93c65826670542e36b9f3394ede1cb61885c6a4befa8f72d
+ env:
+ - name: BOOTSTRAP
+ value: kafka-0.broker.kafka.svc.cluster.local:9092
+ command:
+ - /bin/bash
+ - -cex
+ - >
+ echo "--- start $HOSTNAME $(date --iso-8601='ns' -u) ---" >> /shared/produce.tmp
+ ;
+ tail -f /shared/produce.tmp |
+ ./bin/kafka-console-producer.sh --broker-list $BOOTSTRAP --topic test-produce-consume
+ ;
+ volumeMounts:
+ - name: config
+ mountPath: /test
+ - name: shared
+ mountPath: /shared
+ - name: consumer
+ image: solsson/kafka:1.0.0@sha256:17fdf1637426f45c93c65826670542e36b9f3394ede1cb61885c6a4befa8f72d
+ env:
+ - name: BOOTSTRAP
+ value: kafka-0.broker.kafka.svc.cluster.local:9092
+ command:
+ - /bin/bash
+ - -cex
+ - >
+ ./bin/kafka-console-consumer.sh --bootstrap-server $BOOTSTRAP --topic test-produce-consume |
+ tee /shared/consumed.tmp
+ ;
+ volumeMounts:
+ - name: config
+ mountPath: /test
+ - name: shared
+ mountPath: /shared
+ - name: testcase
+ image: solsson/kafkacat@sha256:ebebf47061300b14a4b4c2e1e4303ab29f65e4b95d34af1b14bb8f7ec6da7cef
+ env:
+ - name: BOOTSTRAP
+ value: kafka-0.broker.kafka.svc.cluster.local:9092
+ command:
+ - /bin/bash
+ - -e
+ - /test/setup.sh
+ readinessProbe:
+ exec:
+ command:
+ - /bin/bash
+ - -e
+ - /test/test.sh
+ initialDelaySeconds: 10
+ periodSeconds: 10
+ livenessProbe:
+ exec:
+ command:
+ - /bin/bash
+ - -e
+ - /test/quit-on-nonzero-exit.sh
+ volumeMounts:
+ - name: config
+ mountPath: /test
+ - name: shared
+ mountPath: /shared
+ volumes:
+ - name: config
+ configMap:
+ name: produce-consume
+ - name: shared
+ emptyDir: {}
diff --git a/outside-services/outside-0.yml b/outside-services/outside-0.yml
new file mode 100644
index 0000000..7bc12bd
--- /dev/null
+++ b/outside-services/outside-0.yml
@@ -0,0 +1,15 @@
+kind: Service
+apiVersion: v1
+metadata:
+ name: outside-0
+ namespace: kafka
+spec:
+ selector:
+ app: kafka
+ kafka-broker-id: "0"
+ ports:
+ - protocol: TCP
+ targetPort: 9094
+ port: 32400
+ nodePort: 32400
+  type: NodePort
\ No newline at end of file
diff --git a/outside-services/outside-1.yml b/outside-services/outside-1.yml
new file mode 100644
index 0000000..1642ee0
--- /dev/null
+++ b/outside-services/outside-1.yml
@@ -0,0 +1,15 @@
+kind: Service
+apiVersion: v1
+metadata:
+ name: outside-1
+ namespace: kafka
+spec:
+ selector:
+ app: kafka
+ kafka-broker-id: "1"
+ ports:
+ - protocol: TCP
+ targetPort: 9094
+ port: 32401
+ nodePort: 32401
+  type: NodePort
\ No newline at end of file
diff --git a/outside-services/outside-2.yml b/outside-services/outside-2.yml
new file mode 100644
index 0000000..78c313c
--- /dev/null
+++ b/outside-services/outside-2.yml
@@ -0,0 +1,15 @@
+kind: Service
+apiVersion: v1
+metadata:
+ name: outside-2
+ namespace: kafka
+spec:
+ selector:
+ app: kafka
+ kafka-broker-id: "2"
+ ports:
+ - protocol: TCP
+ targetPort: 9094
+ port: 32402
+ nodePort: 32402
+  type: NodePort
\ No newline at end of file
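These services pair with the init script's `advertised.listeners=OUTSIDE://<node-ip>:3240<id>` rewrite: nodePort 32400 maps to broker 0, 32401 to broker 1, 32402 to broker 2, each forwarding to container port 9094. Assuming a node IP that is reachable from your network, a rough smoke test from outside the cluster:

```
# Metadata via broker 0's outside listener; NODE_IP is any reachable node address
kafkacat -L -b $NODE_IP:32400
```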
diff --git a/prod-yolean.sh b/prod-yolean.sh
deleted file mode 100755
index 88ea25c..0000000
--- a/prod-yolean.sh
+++ /dev/null
@@ -1,50 +0,0 @@
-#!/bin/bash
-# Combines addons into what we 'kubectl apply -f' to production
-set -ex
-
-ANNOTATION_PREFIX='yolean.se/kubernetes-kafka-'
-BUILD=$(basename $0)
-REMOTE=origin
-FROM="$REMOTE/"
-START=master
-
-[ ! -z "$(git status --untracked-files=no -s)" ] && echo "Working copy must be clean" && exit 1
-
-function annotate {
- key=$1
- value=$2
- file=$3
- case $(uname) in
- Darwin*)
- sed -i '' 's| annotations:| annotations:\
- --next-annotation--|' $file
- sed -i '' "s|--next-annotation--|${ANNOTATION_PREFIX}$key: '$value'|" $file
- ;;
- *)
- sed -i "s| annotations:| annotations:\n ${ANNOTATION_PREFIX}$key: '$value'|" $file
- ;;
- esac
-}
-
-git checkout ${FROM}$START
-REVS="$START:$(git rev-parse --short ${FROM}$START)"
-
-git checkout -b prod-yolean-$(date +"%Y%m%dT%H%M%S")
-
-for BRANCH in \
- addon-storage-classes \
- addon-metrics \
- addon-rest \
- addon-kube-events-topic
-do
- git merge --no-ff ${FROM}$BRANCH -m "prod-yolean merge ${FROM}$BRANCH" && \
- REVS="$REVS $BRANCH:$(git rev-parse --short ${FROM}$BRANCH)"
-done
-
-END_BRANCH_GIT=$(git rev-parse --abbrev-ref HEAD)
-
-for F in ./50kafka.yml ./zookeeper/50pzoo.yml ./zookeeper/51zoo.yml
-do
- annotate revs "$REVS" $F
- annotate build "$END_BRANCH_GIT" $F
-done
diff --git a/rbac-namespace-default/node-reader.yml b/rbac-namespace-default/node-reader.yml
new file mode 100644
index 0000000..1d0cd51
--- /dev/null
+++ b/rbac-namespace-default/node-reader.yml
@@ -0,0 +1,37 @@
+# To see if init containers need RBAC:
+#
+# $ kubectl exec kafka-0 -- cat /etc/kafka/server.properties | grep broker.rack
+# #init#broker.rack=# zone lookup failed, see -c init-config logs
+# $ kubectl logs -c init-config kafka-0
+# ++ kubectl get node some-node '-o=go-template={{index .metadata.labels "failure-domain.beta.kubernetes.io/zone"}}'
+# Error from server (Forbidden): User "system:serviceaccount:kafka:default" cannot get nodes at the cluster scope.: "Unknown user \"system:serviceaccount:kafka:default\""
+#
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: node-reader
+ labels:
+ origin: github.com_Yolean_kubernetes-kafka
+rules:
+- apiGroups:
+ - ""
+ resources:
+ - nodes
+ verbs:
+ - get
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: kafka-node-reader
+ labels:
+ origin: github.com_Yolean_kubernetes-kafka
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: node-reader
+subjects:
+- kind: ServiceAccount
+ name: default
+ namespace: kafka
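With the role and binding applied, the zone lookup in the init script should stop failing. If your user may impersonate service accounts, you can verify without restarting a pod:

```
# Should answer "yes" once the ClusterRoleBinding exists
kubectl auth can-i get nodes --as=system:serviceaccount:kafka:default
```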
diff --git a/test/basic-produce-consume.yml b/test/basic-produce-consume.yml
deleted file mode 100644
index fdacea0..0000000
--- a/test/basic-produce-consume.yml
+++ /dev/null
@@ -1,89 +0,0 @@
----
-kind: ConfigMap
-metadata:
- name: basic-produce-consume
- namespace: test-kafka
-apiVersion: v1
-data:
-
- setup.sh: |-
- touch /tmp/testlog
-
- ./bin/kafka-topics.sh --zookeeper $ZOOKEEPER \
- --create --if-not-exists --topic test-basic-produce-consume \
- --partitions 1 --replication-factor 1
-
- # Despite the deprecation warning --zookeeper nothing is consumed when using --bootstrap-server
- ./bin/kafka-console-consumer.sh --zookeeper $ZOOKEEPER --topic test-basic-produce-consume > /tmp/testconsumed &
-
- tail -f /tmp/testlog
-
- continue.sh: |-
- exit 0
-
- run.sh: |-
- exec >> /tmp/testlog
- exec 2>&1
-
- unique=$(date -Ins)
-
- echo "Test $unique" | ./bin/kafka-console-producer.sh --broker-list $BOOTSTRAP --topic test-basic-produce-consume
- echo ""
- tail -n 1 /tmp/testconsumed | grep $unique
-
- # How to make this test fail:
- #apt-get update && apt-get install -y --no-install-recommends procps
- #pkill java
-
- exit 0
-
----
-apiVersion: apps/v1beta1
-kind: Deployment
-metadata:
- name: basic-produce-consume
- namespace: test-kafka
-spec:
- replicas: 1
- template:
- metadata:
- labels:
- test-target: kafka
- test-type: readiness
- spec:
- containers:
- - name: testcase
- image: solsson/kafka:0.11.0.0@sha256:b27560de08d30ebf96d12e74f80afcaca503ad4ca3103e63b1fd43a2e4c976ce
- env:
- - name: BOOTSTRAP
- value: kafka-0.broker.kafka.svc.cluster.local:9092,kafka-1.broker.kafka.svc.cluster.local:9092,kafka-2.broker.kafka.svc.cluster.local:9092
- - name: ZOOKEEPER
- value: zookeeper.kafka.svc.cluster.local:2181
- # Test set up
- command:
- - /bin/bash
- - -e
- - /test/setup.sh
- # Test run, again and again
- readinessProbe:
- exec:
- command:
- - /bin/bash
- - -e
- - /test/run.sh
- # JVM start is slow, can we keep producer started and restore the default preriod 10s?
- periodSeconds: 30
- # Test quit on nonzero exit
- livenessProbe:
- exec:
- command:
- - /bin/bash
- - -e
- - /test/continue.sh
- volumeMounts:
- - name: config
- mountPath: /test
- volumes:
- - name: config
- configMap:
- name: basic-produce-consume
diff --git a/test/basic-with-kafkacat.yml b/test/basic-with-kafkacat.yml
deleted file mode 100644
index a8974e8..0000000
--- a/test/basic-with-kafkacat.yml
+++ /dev/null
@@ -1,103 +0,0 @@
----
-kind: ConfigMap
-metadata:
- name: basic-with-kafkacat
- namespace: test-kafka
-apiVersion: v1
-data:
-
- setup.sh: |-
- touch /tmp/testlog
- tail -f /tmp/testlog
-
- continue.sh: |-
- exit 0
-
- run.sh: |-
- exec >> /tmp/testlog
- exec 2>&1
-
- unique=$(date -Ins)
-
- echo "Test $unique" | kafkacat -P -b $BOOTSTRAP -t test-basic-with-kafkacat -v
- kafkacat -C -b $BOOTSTRAP -t test-basic-with-kafkacat -o -1 -e | grep $unique
-
- exit 0
-
----
-apiVersion: batch/v1
-kind: Job
-metadata:
- name: basic-with-kafkacat
- namespace: test-kafka
-spec:
- template:
- spec:
- containers:
- - name: topic-create
- image: solsson/kafka:0.11.0.0@sha256:b27560de08d30ebf96d12e74f80afcaca503ad4ca3103e63b1fd43a2e4c976ce
- command:
- - ./bin/kafka-topics.sh
- - --zookeeper
- - zookeeper.kafka.svc.cluster.local:2181
- - --create
- - --if-not-exists
- - --topic
- - test-basic-with-kafkacat
- - --partitions
- - "1"
- - --replication-factor
- - "1"
- restartPolicy: Never
----
-apiVersion: apps/v1beta1
-kind: Deployment
-metadata:
- name: basic-with-kafkacat
- namespace: test-kafka
-spec:
- replicas: 1
- template:
- metadata:
- labels:
- test-target: kafka
- test-type: readiness
- spec:
- containers:
- - name: testcase
- # common test images
- #image: solsson/curl@sha256:8b0927b81d10043e70f3e05e33e36fb9b3b0cbfcbccdb9f04fd53f67a270b874
- image: solsson/kafkacat@sha256:1266d140c52cb39bf314b6f22b6d7a01c4c9084781bc779fdfade51214a713a8
- #image: solsson/kubectl-kafkacat@sha256:3715a7ede3f168f677ee6faf311ff6887aff31f660cfeecad5d87b4f18516321
- env:
- - name: BOOTSTRAP
- #value: kafka-0.broker.kafka.svc.cluster.local:9092,kafka-1.broker.kafka.svc.cluster.local:9092,kafka-2.broker.kafka.svc.cluster.local:9092
- value: kafka-0.broker.kafka.svc.cluster.local:9092
- - name: ZOOKEEPER
- value: zookeeper.kafka.svc.cluster.local:2181
- # Test set up
- command:
- - /bin/bash
- - -e
- - /test/setup.sh
- # Test run, again and again
- readinessProbe:
- exec:
- command:
- - /bin/bash
- - -e
- - /test/run.sh
- # Test quit on nonzero exit
- livenessProbe:
- exec:
- command:
- - /bin/bash
- - -e
- - /test/continue.sh
- volumeMounts:
- - name: config
- mountPath: /test
- volumes:
- - name: config
- configMap:
- name: basic-with-kafkacat
diff --git a/yahoo-kafka-manager/kafka-manager-service.yml b/yahoo-kafka-manager/kafka-manager-service.yml
new file mode 100644
index 0000000..3d26adf
--- /dev/null
+++ b/yahoo-kafka-manager/kafka-manager-service.yml
@@ -0,0 +1,12 @@
+kind: Service
+apiVersion: v1
+metadata:
+ name: kafka-manager
+ namespace: kafka
+spec:
+ selector:
+ app: kafka-manager
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 80
diff --git a/yahoo-kafka-manager/kafka-manager.yml b/yahoo-kafka-manager/kafka-manager.yml
new file mode 100644
index 0000000..77319e4
--- /dev/null
+++ b/yahoo-kafka-manager/kafka-manager.yml
@@ -0,0 +1,26 @@
+apiVersion: apps/v1beta2
+kind: Deployment
+metadata:
+ name: kafka-manager
+ namespace: kafka
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: kafka-manager
+ template:
+ metadata:
+ labels:
+ app: kafka-manager
+ spec:
+ containers:
+ - name: kafka-manager
+ image: solsson/kafka-manager@sha256:e07b5c50b02c761b3471ebb62ede88afc0625e325fe428316e32fec7f227ff9b
+ ports:
+ - containerPort: 80
+ env:
+ - name: ZK_HOSTS
+ value: zookeeper.kafka:2181
+ command:
+ - ./bin/kafka-manager
+    - -Dhttp.port=80
\ No newline at end of file
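The Deployment serves the manager UI on port 80 behind the `kafka-manager` service. For a quick look without exposing anything, a port-forward works (the `svc/` shorthand needs a reasonably recent kubectl; on older clients forward the pod instead):

```
kubectl -n kafka port-forward svc/kafka-manager 8080:80
# then browse http://localhost:8080 and add a cluster with ZK hosts zookeeper.kafka:2181
```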
diff --git a/zookeeper/50pzoo.yml b/zookeeper/50pzoo.yml
index f9d5c58..a57dd1f 100644
--- a/zookeeper/50pzoo.yml
+++ b/zookeeper/50pzoo.yml
@@ -1,11 +1,17 @@
-apiVersion: apps/v1beta1
+apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
name: pzoo
namespace: kafka
spec:
+ selector:
+ matchLabels:
+ app: zookeeper
+ storage: persistent
serviceName: "pzoo"
replicas: 3
+ updateStrategy:
+ type: OnDelete
template:
metadata:
labels:
@@ -16,7 +23,7 @@ spec:
terminationGracePeriodSeconds: 10
initContainers:
- name: init-config
- image: solsson/kafka:0.11.0.0@sha256:b27560de08d30ebf96d12e74f80afcaca503ad4ca3103e63b1fd43a2e4c976ce
+ image: solsson/kafka:1.0.0@sha256:17fdf1637426f45c93c65826670542e36b9f3394ede1cb61885c6a4befa8f72d
command: ['/bin/bash', '/etc/kafka/init.sh']
volumeMounts:
- name: config
@@ -25,7 +32,7 @@ spec:
mountPath: /var/lib/zookeeper/data
containers:
- name: zookeeper
- image: solsson/kafka:0.11.0.0@sha256:b27560de08d30ebf96d12e74f80afcaca503ad4ca3103e63b1fd43a2e4c976ce
+ image: solsson/kafka:1.0.0@sha256:17fdf1637426f45c93c65826670542e36b9f3394ede1cb61885c6a4befa8f72d
env:
- name: KAFKA_LOG4J_OPTS
value: -Dlog4j.configuration=file:/etc/kafka/log4j.properties
@@ -43,18 +50,12 @@ spec:
requests:
cpu: 10m
memory: 100Mi
- livenessProbe:
- exec:
- command:
- - /bin/sh
- - -c
- - '[ "imok" = "$(echo ruok | nc -w 1 127.0.0.1 2181)" ]'
readinessProbe:
exec:
command:
- /bin/sh
- -c
- - '[ "imok" = "$(echo ruok | nc -w 1 127.0.0.1 2181)" ]'
+ - '[ "imok" = "$(echo ruok | nc -w 1 -q 1 127.0.0.1 2181)" ]'
volumeMounts:
- name: config
mountPath: /etc/kafka
@@ -69,6 +70,7 @@ spec:
name: data
spec:
accessModes: [ "ReadWriteOnce" ]
+ storageClassName: kafka-zookeeper
resources:
requests:
- storage: 10Gi
+ storage: 1Gi
diff --git a/zookeeper/51zoo.yml b/zookeeper/51zoo.yml
index 778567d..fdafdef 100644
--- a/zookeeper/51zoo.yml
+++ b/zookeeper/51zoo.yml
@@ -1,11 +1,17 @@
-apiVersion: apps/v1beta1
+apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
name: zoo
namespace: kafka
spec:
+ selector:
+ matchLabels:
+ app: zookeeper
+ storage: ephemeral
serviceName: "zoo"
replicas: 2
+ updateStrategy:
+ type: OnDelete
template:
metadata:
labels:
@@ -16,7 +22,7 @@ spec:
terminationGracePeriodSeconds: 10
initContainers:
- name: init-config
- image: solsson/kafka:0.11.0.0@sha256:b27560de08d30ebf96d12e74f80afcaca503ad4ca3103e63b1fd43a2e4c976ce
+ image: solsson/kafka:1.0.0@sha256:17fdf1637426f45c93c65826670542e36b9f3394ede1cb61885c6a4befa8f72d
command: ['/bin/bash', '/etc/kafka/init.sh']
env:
- name: ID_OFFSET
@@ -28,7 +34,7 @@ spec:
mountPath: /var/lib/zookeeper/data
containers:
- name: zookeeper
- image: solsson/kafka:0.11.0.0@sha256:b27560de08d30ebf96d12e74f80afcaca503ad4ca3103e63b1fd43a2e4c976ce
+ image: solsson/kafka:1.0.0@sha256:17fdf1637426f45c93c65826670542e36b9f3394ede1cb61885c6a4befa8f72d
env:
- name: KAFKA_LOG4J_OPTS
value: -Dlog4j.configuration=file:/etc/kafka/log4j.properties
@@ -46,18 +52,12 @@ spec:
requests:
cpu: 10m
memory: 100Mi
- livenessProbe:
- exec:
- command:
- - /bin/sh
- - -c
- - '[ "imok" = "$(echo ruok | nc -w 1 127.0.0.1 2181)" ]'
readinessProbe:
exec:
command:
- /bin/sh
- -c
- - '[ "imok" = "$(echo ruok | nc -w 1 127.0.0.1 2181)" ]'
+ - '[ "imok" = "$(echo ruok | nc -w 1 -q 1 127.0.0.1 2181)" ]'
volumeMounts:
- name: config
mountPath: /etc/kafka