
The manifests here require Kubernetes 1.8 or later. On earlier versions, use v2.1.0.

Kafka on Kubernetes

Transparent Kafka setup that you can grow with. Good for both experiments and production.

How to use:

* Good to know: you'll likely want to fork this repo. It prioritizes clarity over configurability, using plain manifests and .properties files; no client-side logic.
* Run a Kubernetes cluster, minikube or real.
* Quickstart: use the kubectl apply commands below.
* Have a look at addons, or the official forks:
  - kubernetes-kafka-small for single-node clusters like Minikube.
  - StreamingMicroservicesPlatform: like Confluent's platform quickstart, but for Kubernetes.
* Join the discussion in issues and PRs.
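The quickstart above can be condensed into one shell session. A sketch: it assumes a configured kubectl context and a clone of this repo, and applies the optional RBAC policies first:

```shell
# Optional: namespace-scoped RBAC policies, for clusters that enforce RBAC
kubectl apply -f rbac-namespace-default/

# Zookeeper first, then Kafka; both land in the "kafka" namespace
kubectl apply -f ./zookeeper/
kubectl apply -f ./kafka/

# Watch the pods come up
kubectl --namespace kafka get pods -w
```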

No readable readme can properly introduce both Kafka and Kubernetes, but we think the combination of the two is a great backbone for microservices. Back when we read Newman we were beginners with both. Now we've read Kleppmann, Confluent and SRE and enjoy this "Streaming Platform" lock-in :smile:.

We also think the plain-yaml approach of this project is easier to understand and evolve than helm charts.

What you get

Keep an eye on kubectl --namespace kafka get pods -w.

The goal is to provide bootstrap servers: kafka-0.broker.kafka.svc.cluster.local:9092,kafka-1.broker.kafka.svc.cluster.local:9092,kafka-2.broker.kafka.svc.cluster.local:9092

Zookeeper at zookeeper.kafka.svc.cluster.local:2181.
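Client configuration can be derived from the StatefulSet naming scheme above. A sketch, assuming the default of three broker replicas behind the headless broker service:

```shell
# Build the bootstrap.servers string for a 3-replica "kafka" StatefulSet
REPLICAS=3
BOOTSTRAP=$(for i in $(seq 0 $((REPLICAS - 1))); do
  printf 'kafka-%d.broker.kafka.svc.cluster.local:9092,' "$i"
done | sed 's/,$//')
echo "$BOOTSTRAP"
```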

Prepare storage classes

For Minikube run kubectl apply -f configure/minikube-storageclass-broker.yml; kubectl apply -f configure/minikube-storageclass-zookeeper.yml.

There's a similar setup for AKS under configure/aks-* and for GKE under configure/gke-*. You might want to tweak these before applying.
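For example on GKE, the provided classes can be applied one file at a time (a sketch; review each storage class first, since zones and disk types are cluster-specific):

```shell
# Apply every GKE storage class in the configure folder
for f in configure/gke-*; do
  kubectl apply -f "$f"
done
```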

Start Zookeeper

The Kafka book recommends that Kafka have its own Zookeeper ensemble of at least 5 instances.

kubectl apply -f ./zookeeper/

To support automatic migration when an availability zone becomes unavailable, we mix persistent and ephemeral storage.
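Once the pods are Running, you can ask a Zookeeper member about its health. A sketch; the pod name pzoo-0 and the availability of nc inside the image are assumptions, so list your pods first if they differ:

```shell
# "ruok" is one of Zookeeper's four-letter admin commands; a healthy
# member answers "imok". Pod name pzoo-0 is an assumption - check
# `kubectl -n kafka get pods` for actual names.
kubectl -n kafka exec pzoo-0 -- bash -c 'echo ruok | nc localhost 2181'
```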

Start Kafka

kubectl apply -f ./kafka/

You might want to verify in logs that Kafka found its own DNS name(s) correctly. Look for records like:

kubectl -n kafka logs kafka-0 | grep "Registered broker"
# INFO Registered broker 0 at path /brokers/ids/0 with addresses: PLAINTEXT -> EndPoint(kafka-0.broker.kafka.svc.cluster.local,9092,PLAINTEXT)
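To pull just the advertised host:port out of such a record, a small sed filter works, demonstrated here on the sample line above:

```shell
# Extract host and port from the EndPoint(...) tuple in the broker's
# registration log record
LINE='INFO Registered broker 0 at path /brokers/ids/0 with addresses: PLAINTEXT -> EndPoint(kafka-0.broker.kafka.svc.cluster.local,9092,PLAINTEXT)'
echo "$LINE" | sed -n 's/.*EndPoint(\([^,]*\),\([0-9]*\),.*/\1:\2/p'
# -> kafka-0.broker.kafka.svc.cluster.local:9092
```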

That's it. Just add business value :wink:.

RBAC

For clusters that enforce RBAC, a minimal set of policies is provided. Apply it with:

kubectl apply -f rbac-namespace-default/
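Afterwards, kubectl auth can-i helps check what a service account is actually allowed to do (a sketch; which verbs and resources are granted depends on the policies in the folder):

```shell
# Does the default service account in the kafka namespace have read
# access to pods? Answers "yes" or "no".
kubectl auth can-i get pods --namespace kafka \
  --as=system:serviceaccount:kafka:default
```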

Tests

Tests are based on the kube-test concept. Like the rest of this repo they have kubectl as the only local dependency.

Self-tests are optional. They generate some load, but indicate whether the platform is working.

* To include tests, replace apply -f with apply -R -f in the kubectl commands above.
* Anything that isn't READY in kubectl get pods -l test-type=readiness --namespace=test-kafka is a failed test.
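The READY-column check can be scripted. A sketch of the filter, shown on sample output (the pod names are illustrative); pipe the real kubectl get pods output through the same awk to list failing tests:

```shell
# Any pod whose ready-count differs from its desired count (e.g. 0/1)
# is a failed test; print its name.
# Real usage (assumes the test namespace from above):
#   kubectl get pods -l test-type=readiness --namespace=test-kafka \
#     --no-headers | awk '{ split($2, r, "/"); if (r[1] != r[2]) print $1 }'
printf '%s\n' \
  'topic-test-8475jj    1/1   Running   0   5m' \
  'produce-test-9kv2x   0/1   Running   0   5m' \
  | awk '{ split($2, r, "/"); if (r[1] != r[2]) print $1 }'
```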