
Introducing the Confluent Operator: Apache Kafka on Kubernetes Made Simple

Written by Neha Narkhede

At Confluent, our mission is to put a Streaming Platform at the heart of every digital company in the world. This means making it easy to deploy and use Apache Kafka and Confluent Platform—the de facto Streaming Platform—across a variety of infrastructure environments. In the last few years, the rise of Kubernetes as the common operations runtime across a myriad of platforms in the enterprise can’t be overlooked. So today, I’m excited to announce the Confluent Operator: the best way to deploy Apache Kafka and Confluent Platform on Kubernetes. This includes a rollout of pre-built Docker images for the entire Confluent Platform, Kubernetes deployment templates, a reference architecture that enables users to deploy Kafka and Confluent Platform on Kubernetes, and an implementation of the Kubernetes Operator API for automated provisioning, management, and operations of Apache Kafka on Kubernetes.

Kafka + Kubernetes expertise = The Confluent Operator

Over the last few years, we’ve accumulated a large amount of operational experience as part of supporting Kafka in production for hundreds of companies worldwide. Not only does that experience apply to on-premises deployments of Kafka, but we’ve also gathered a ton of Kubernetes experience as part of running Confluent Cloud for over a year: a 24×7 fully-managed and hosted Apache Kafka as a service.

Through this experience, we realized that the right product to run Kafka on Kubernetes would need to combine both Kafka and Kubernetes expertise, and the Confluent Operator does exactly that. It productizes years of Kafka experience with Kubernetes expertise to offer our users the best way of using Apache Kafka on Kubernetes.

As part of this effort, we’ve collaborated with an ecosystem of Kubernetes partners to design and build the Confluent Operator. The Confluent Operator includes an implementation of the Kubernetes Operator API that provides deployment and management automation for Kafka and the Confluent Platform on Kubernetes.

The Confluent Operator uses official Confluent Platform Docker images that have been tested and are production-ready. The Confluent Operator will support the popular Kubernetes distributions, including Pivotal Container Service (PKS), Heptio Kubernetes Subscription, Mesosphere Kubernetes and Red Hat OpenShift; as well as Google Kubernetes Engine (GKE), Amazon Elastic Container Service for Kubernetes (EKS), and Azure Container Service (AKS) as these integrations become available.

Kafka on Kubernetes: the hard parts

Managing stateful applications such as Kafka, ZooKeeper, or databases running on Kubernetes is notoriously difficult. There are several challenges to overcome: maintaining a stable identity for each node in a cluster, retaining a node’s state across failures, restoring the state machine of the application back to a normal state after every failure, and more.

To overcome some of these challenges, the Kubernetes community introduced the StatefulSet and Persistent Volume abstractions. StatefulSets provide a stable identity for each pod in the form of an ordinal index, stable network endpoints for clients to reach pods, and stable Persistent Volumes that are always mounted for the pod, even across failures.
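
To make this concrete, here is a minimal sketch, using the official Kubernetes Python client, of what a bare-bones Kafka StatefulSet might look like. The image tag, storage size, and namespace are illustrative placeholders, not the Confluent Operator’s actual manifests:

```python
# A minimal sketch (not the Confluent Operator's actual manifests): a three-broker
# Kafka StatefulSet whose pods get stable names (kafka-0, kafka-1, kafka-2), stable
# DNS entries via a headless Service, and a PersistentVolumeClaim per pod that
# survives pod restarts.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster

statefulset = {
    "apiVersion": "apps/v1",
    "kind": "StatefulSet",
    "metadata": {"name": "kafka", "namespace": "kafka"},
    "spec": {
        "serviceName": "kafka-headless",   # headless Service gives each pod stable DNS
        "replicas": 3,
        "selector": {"matchLabels": {"app": "kafka"}},
        "template": {
            "metadata": {"labels": {"app": "kafka"}},
            "spec": {
                "containers": [{
                    "name": "kafka",
                    "image": "confluentinc/cp-kafka:latest",   # placeholder tag
                    "ports": [{"containerPort": 9092}],
                }]
            },
        },
        # One PVC per pod, re-attached to the same ordinal even after failures.
        "volumeClaimTemplates": [{
            "metadata": {"name": "data"},
            "spec": {
                "accessModes": ["ReadWriteOnce"],
                "resources": {"requests": {"storage": "1Ti"}},
            },
        }],
    },
}

client.AppsV1Api().create_namespaced_stateful_set(namespace="kafka", body=statefulset)
```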

Although these Kubernetes primitives are great building blocks, they still leave a large burden on users to set them all up correctly, and in their general form, they don’t understand the specifics of the application enough to manage the entire application lifecycle. For Kafka, these are things like rolling cluster restarts, data balancing, and configuration for accessing Kafka from both inside and outside of the Kubernetes cluster.

We can fill the gap using two of the powerful extensibility features of Kubernetes: Custom Resource Definitions and Operators. Custom Resource Definitions allow you to define your own objects that become part of the Kubernetes API. You can then specify the desired state that those object types should be in for correct operation of the stateful application. A Kubernetes Operator is a domain-specific controller that actively makes the actual state of the application match that defined desired state. It does that by watching for changes in desired state (such as new clusters to be provisioned, updates to be applied, or changes in cluster size) as well as changes in actual state (such as a pod failing or progress of safe rolling upgrade) and taking the appropriate action accordingly. For example, the user can tell the system the equivalent of “I want a three-node ZooKeeper cluster with SSD storage and a six-node Kafka cluster with 16TB HDD storage per node.”
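
For illustration, here is a sketch of how such a desired-state request could be expressed as a custom resource and submitted with the Kubernetes Python client. The group, version, kind, and field names are hypothetical stand-ins, not the Confluent Operator’s actual API:

```python
# A minimal sketch of a desired-state request as a custom resource. Everything
# below (group, version, kind, field names) is a hypothetical illustration.
from kubernetes import client, config

config.load_kube_config()

kafka_cluster = {
    "apiVersion": "example.confluent.io/v1alpha1",   # hypothetical group/version
    "kind": "KafkaCluster",                           # hypothetical kind
    "metadata": {"name": "prod-kafka", "namespace": "kafka"},
    "spec": {
        "zookeeper": {"replicas": 3, "storage": {"size": "500Gi", "class": "ssd"}},
        "kafka": {"replicas": 6, "storage": {"size": "16Ti", "class": "hdd"}},
    },
}

# The operator watches resources of this kind and drives the actual cluster
# state toward the declared spec.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="example.confluent.io",
    version="v1alpha1",
    namespace="kafka",
    plural="kafkaclusters",
    body=kafka_cluster,
)
```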

These abstractions are powerful, but to fully leverage them you still need to handle nuances specific to the stateful application under consideration. For instance, here are some things that are specific to Kafka and need special care when deploying it on Kubernetes.

Managing configuration for Kafka in Kubernetes is involved

Being able to deploy Kafka clusters means getting configuration right. Kubernetes ConfigMaps are a clean way to expose configuration to a service. For Kafka on Kubernetes, this means creating ConfigMaps with the right config values for the environment in which you’re deploying your Kafka clusters, and having the Kafka pods read their configuration from these ConfigMaps. Configuration management for Kafka clusters also needs to be done carefully: some configuration values are pod-specific, such as broker.id, broker.rack, and advertised.listeners, while others are common to all pods, such as zookeeper.connect, log.dirs, listeners, and any replication-related configuration.
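
As a rough sketch, the split between common and pod-specific configuration could look like this with the Kubernetes Python client; the property values and ConfigMap names are illustrative only:

```python
# A minimal sketch of splitting Kafka configuration into one cluster-wide
# ConfigMap and small per-pod ConfigMaps. Names and values are illustrative.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Properties shared by every broker pod.
common = client.V1ConfigMap(
    metadata=client.V1ObjectMeta(name="kafka-common-config", namespace="kafka"),
    data={"server.properties": "\n".join([
        "zookeeper.connect=zookeeper-0.zk-headless:2181,zookeeper-1.zk-headless:2181",
        "log.dirs=/var/lib/kafka/data",
        "listeners=PLAINTEXT://0.0.0.0:9092",
        "default.replication.factor=3",
        "min.insync.replicas=2",
    ])},
)
v1.create_namespaced_config_map(namespace="kafka", body=common)

# Properties that differ per broker: id, rack, and advertised listeners.
for ordinal in range(3):
    per_pod = client.V1ConfigMap(
        metadata=client.V1ObjectMeta(name=f"kafka-{ordinal}-config", namespace="kafka"),
        data={"pod.properties": "\n".join([
            f"broker.id={ordinal}",
            f"broker.rack=rack-{ordinal % 3}",
            f"advertised.listeners=PLAINTEXT://kafka-{ordinal}.kafka-headless:9092",
        ])},
    )
    v1.create_namespaced_config_map(namespace="kafka", body=per_pod)
```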

The ordinal index that the Kubernetes StatefulSet assigns to each Kafka pod must be propagated as its broker.id, and if you want to ensure that your Kafka cluster is deployed evenly across multiple racks or availability zones, racks must be assigned to the Kafka pods appropriately at configuration time.
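
One way a broker container might derive these values at startup is sketched below; the zone label key and output path are assumptions for illustration:

```python
# A sketch of a broker container's init step: derive broker.id from the
# StatefulSet ordinal embedded in the pod hostname (e.g. "kafka-2" -> 2), and
# broker.rack from the zone label of the node the pod was scheduled onto.
# The label key and output file path are illustrative assumptions.
import socket
from kubernetes import client, config

hostname = socket.gethostname()               # e.g. "kafka-2"
broker_id = int(hostname.rsplit("-", 1)[-1])  # ordinal assigned by the StatefulSet

config.load_incluster_config()
v1 = client.CoreV1Api()
pod = v1.read_namespaced_pod(name=hostname, namespace="kafka")
node = v1.read_node(name=pod.spec.node_name)
rack = node.metadata.labels.get("failure-domain.beta.kubernetes.io/zone", "unknown")

# Write the pod-specific overrides for the broker startup script to pick up.
with open("/etc/kafka/broker-overrides.properties", "w") as f:
    f.write(f"broker.id={broker_id}\nbroker.rack={rack}\n")
```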

Also, if you want to enable any of the SASL authentication mechanisms, the appropriate ConfigMaps need to be created for the log4j, JVM, and JAAS configurations. And from the experience of running Kafka clusters on Kubernetes in production for a while, we’ve also learned that Kafka pod resource settings like memory, CPU, and disk must be validated as well.
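
For example, a JAAS file for SASL/PLAIN could be injected via a Kubernetes Secret along these lines (credentials and names are placeholders); log4j and JVM settings can be handled similarly with ConfigMaps:

```python
# A minimal sketch of wiring SASL/PLAIN broker credentials into the cluster via
# a Kubernetes Secret holding a JAAS file. Credentials and names are placeholders.
from kubernetes import client, config

config.load_kube_config()

jaas_conf = """\
KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin-secret"
  user_admin="admin-secret";
};
"""

secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="kafka-jaas", namespace="kafka"),
    string_data={"kafka_server_jaas.conf": jaas_conf},
)
client.CoreV1Api().create_namespaced_secret(namespace="kafka", body=secret)
```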

It’s not over yet: all of these configuration nuances also need to be applied correctly. Many configuration values in Kafka are static, which means that for a configuration change to take effect, you need to restart the pods.

Rolling restarts and upgrades need special care in Kafka

Any Kafka version change, update to certain broker configurations, or cluster maintenance means restarting all brokers in the Kafka cluster. However, this needs to be done in a rolling fashion in order to keep the cluster’s partitions available throughout the process. For Kafka, a safe rolling restart means doing several things (sketched in code after the list below):

  • Ensuring that there are no under-replicated partitions and that the cluster is healthy (pods are Running and Ready).
  • Gracefully restarting one broker at a time, then waiting for the under-replicated partition count to drop back to 0.
  • Bouncing the next Kafka node only after the newly restarted broker has caught up to the leader, so that leader failover can happen without data loss.
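
Here is a minimal sketch of that loop, assuming the brokers run as a StatefulSet named kafka and that the standard Kafka CLI tools are on the PATH; it is not the Confluent Operator’s implementation, and the kafka-topics connection flag depends on your Kafka version:

```python
# A minimal sketch of a safe rolling restart over a 3-broker StatefulSet.
# Assumptions: StatefulSet "kafka" in namespace "kafka", pods labeled app=kafka,
# and the kafka-topics CLI available (older releases use --zookeeper, newer
# ones use --bootstrap-server).
import subprocess
import time
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

def under_replicated_partitions() -> int:
    out = subprocess.run(
        ["kafka-topics", "--describe", "--under-replicated-partitions",
         "--zookeeper", "zookeeper-0.zk-headless:2181"],
        capture_output=True, text=True, check=True,
    ).stdout
    return len([line for line in out.splitlines() if line.strip()])

def wait_until_healthy(namespace: str = "kafka") -> None:
    while True:
        pods = v1.list_namespaced_pod(namespace, label_selector="app=kafka").items
        ready = bool(pods) and all(
            p.status.phase == "Running"
            and all(c.ready for c in (p.status.container_statuses or []))
            for p in pods
        )
        if ready and under_replicated_partitions() == 0:
            return
        time.sleep(10)

# Restart one broker at a time, waiting for full recovery between bounces.
for ordinal in range(3):
    wait_until_healthy()
    v1.delete_namespaced_pod(name=f"kafka-{ordinal}", namespace="kafka")
    time.sleep(30)  # give the StatefulSet controller time to recreate the pod
    wait_until_healthy()
```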

Shutting down a Kafka broker must also be done gracefully whenever possible so that when the new Kafka pod for the same broker is brought up, the log recovery time is short, the leadership transfer happens quickly, and the partition unavailability window is reduced.

At times, rolling restarts need to be done twice. An example of this is the upgrade to a version of Kafka that includes changes to the on-disk data format and/or the inter-broker protocol, as in Kafka 1.0.x.
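
For example, such an upgrade is typically done in two passes over the broker configuration (version numbers below are illustrative):

```python
# A sketch of the two rolling restarts needed when an upgrade changes the
# inter-broker protocol and on-disk message format. Versions are examples.

# Roll 1: run the new broker binaries but pin protocol/format to the old version.
roll_one_overrides = {
    "inter.broker.protocol.version": "0.11.0",
    "log.message.format.version": "0.11.0",
}

# Roll 2: once every broker runs the new binaries, bump the versions and roll again.
roll_two_overrides = {
    "inter.broker.protocol.version": "1.0",
    "log.message.format.version": "1.0",
}
```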

Scaling Kafka up or down requires data balancing

Adding nodes to a Kafka cluster requires manually assigning some partitions to the new brokers so that load is spread evenly across the expanded Kafka cluster. The StatefulSet abstraction in Kubernetes makes this somewhat easier, but special care is still needed when scaling the cluster to add or remove a Kafka pod. First, this involves creating or deleting the respective ConfigMap objects with the right pod-specific configuration. Second, since newly added pods are not automatically assigned any topic partitions, data balancing tooling needs to be run to allocate partitions to the new pods and, conversely, to move partitions off a pod before it is deleted.
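
A rough sketch of the manual version of that step, using the stock kafka-reassign-partitions tool, might look like this; topic names, broker ids, and the ZooKeeper address are placeholders, and Confluent Platform also ships its own data balancing tooling:

```python
# A sketch of rebalancing data onto newly added brokers with the stock
# kafka-reassign-partitions tool. Topic names and broker ids are illustrative.
import json
import subprocess

topics = {"version": 1, "topics": [{"topic": "orders"}, {"topic": "clicks"}]}
with open("topics.json", "w") as f:
    json.dump(topics, f)

zk = "zookeeper-0.zk-headless:2181"

# Ask the tool for a plan that spreads partitions across the expanded broker set 0-5.
subprocess.run(
    ["kafka-reassign-partitions", "--zookeeper", zk,
     "--topics-to-move-json-file", "topics.json",
     "--broker-list", "0,1,2,3,4,5", "--generate"],
    check=True,
)

# After saving the proposed plan to reassignment.json, execute (and later verify) it.
subprocess.run(
    ["kafka-reassign-partitions", "--zookeeper", zk,
     "--reassignment-json-file", "reassignment.json", "--execute"],
    check=True,
)
```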

Now that we’ve covered the complexity of running a stateful data service like Kafka in Kubernetes, let’s dive into the Confluent Operator in a little more detail.

What does the Confluent Operator do for you?

The Confluent Operator directly addresses the challenges of running Apache Kafka on Kubernetes, and will offer the following features across all Confluent Platform components:

  1. Automated Provisioning
    • Configuration for Confluent Platform clusters to achieve zero-touch provisioning.
    • Deployment of clusters across multiple racks or availability zones.
    • Integration with Persistent Volume Claims to store data either on local disk or network attached storage.
  2. Cluster Management and Operations
    • Automated rolling updates of Confluent Platform clusters after a Confluent Platform version, configuration, or resource update.
    • Elastic scaling of Kafka clusters up or down by updating the cluster configuration (see the sketch after this list).
    • Automated data balancing to distribute replicas evenly across all brokers in a Kafka cluster, both after new brokers are added during a scale-up and before existing brokers are removed during a scale-down.
  3. Resiliency
    • Restoration of a Kafka node to a pod with the same broker.id, configuration, and Persistent Volumes when a Kafka pod dies.
  4. Monitoring
    • End-to-end data completeness SLA monitoring with Control Center
    • Exposes Prometheus metrics for additional alerting and monitoring
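
Continuing the hypothetical custom resource from earlier, elastic scaling could then be requested with a simple patch and left to the operator to carry out; all names and fields remain illustrative assumptions:

```python
# A sketch of requesting a scale-up by patching the (hypothetical) custom
# resource from the earlier example; the operator adds brokers and rebalances.
from kubernetes import client, config

config.load_kube_config()

client.CustomObjectsApi().patch_namespaced_custom_object(
    group="example.confluent.io",
    version="v1alpha1",
    namespace="kafka",
    plural="kafkaclusters",
    name="prod-kafka",
    body={"spec": {"kafka": {"replicas": 8}}},   # scale from 6 to 8 brokers
)
```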

How can I use the Confluent Operator?

In the next month, we’ll release a reference architecture along with a set of deployment templates and Confluent Platform Docker images. You can use these to deploy Apache Kafka and Confluent Platform on Kubernetes.

By the middle of this year, we will start early access for the Kubernetes Operator. Sign up to let us know that you are interested!

Conclusion

With the Confluent Operator, we are productizing years of Kafka experience with Kubernetes expertise to offer our users the best way of using Apache Kafka on Kubernetes. Our goal in this is to make streaming data ubiquitous: Kubernetes lets you run your apps and services anywhere, Kafka enables you to make your data accessible instantaneously, anywhere.

Let us know if you are interested. We look forward to helping you run a Streaming Platform on Kubernetes!

To learn more, watch the online talk Stateful, Stateless and Serverless – Running Apache Kafka on Kubernetes, featuring Joe Beda, Heptio CTO and Kubernetes Co-creator, and Gwen Shapira, Principal Data Architect at Confluent.

  • Neha Narkhede is the co-founder at Confluent, a company backing the popular Apache Kafka messaging system. Prior to founding Confluent, Neha led streams infrastructure at LinkedIn, where she was responsible for LinkedIn’s streaming infrastructure built on top of Apache Kafka and Apache Samza. She is one of the initial authors of Apache Kafka and a committer and PMC member on the project.
