Change data capture is a popular method to connect database tables to data streams, but it comes with drawbacks. The next evolution of the CDC pattern, first-class data products, provides resilient pipelines that support both real-time and batch processing while isolating upstream systems...
Learn how the latest innovations in Kora enable us to introduce new Confluent Cloud Freight clusters, which can save you up to 90% at GBps+ scale. Confluent Cloud Freight clusters are now available in Early Access.
Learn how to contribute to open source Apache Kafka by writing Kafka Improvement Proposals (KIPs) that solve problems and add features! Read on for real examples.
Today, we are excited to announce that Amazon EventBridge has joined the Connect with Confluent program, making it the third AWS service to become part of this collaborative effort.
Learn how to create a docker-compose.yml file for Kafka using Kafka Docker Composer. Set up clusters, perform failover testing, and more with this guide.
Learn about the bits and bytes of what happens behind the scenes in the Apache Kafka producer and consumer clients when communicating with the Schema Registry and serializing and deserializing messages.
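As a rough illustration of that flow (a sketch, not code from the post), a Java producer configured with Confluent's KafkaAvroSerializer delegates schema registration and lookup to Schema Registry and prepends the schema ID to each serialized message. The broker address, registry URL, topic name, and record schema below are placeholder assumptions:

```java
// Minimal sketch: producer wired to Schema Registry via the Avro serializer.
// All addresses, topic names, and the schema itself are illustrative assumptions.
import java.util.Properties;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import io.confluent.kafka.serializers.KafkaAvroSerializer;

public class AvroProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class.getName());
        // The serializer talks to Schema Registry over HTTP to register/look up schemas.
        props.put("schema.registry.url", "http://localhost:8081");

        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Click\",\"fields\":[{\"name\":\"page\",\"type\":\"string\"}]}");
        GenericRecord click = new GenericData.Record(schema);
        click.put("page", "/index");

        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            // The serialized value on the wire is: magic byte + schema ID + Avro payload.
            producer.send(new ProducerRecord<>("clicks", "user-1", click));
        }
    }
}
```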
The rise of fully managed cloud services fundamentally changed the technology landscape and introduced benefits like increased flexibility, accelerated deployment, and reduced downtime. Confluent offers a portfolio of fully managed...
Learn how to track events in a large codebase, GitHub in this example, using Apache Kafka and Kafka Streams.
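For context, a Kafka Streams topology for this kind of event tracking can be sketched in a few lines of Java. The topic names and the string-based event format below are illustrative assumptions, not the post's actual code:

```java
// Minimal Kafka Streams sketch: read GitHub-style events from one topic,
// keep only push events, and write them to another topic.
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class GithubEventsSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "github-events-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> events = builder.stream("github-events");
        // Keep only push events; real filtering logic would parse the event payload properly.
        events.filter((key, value) -> value != null && value.contains("\"type\":\"PushEvent\""))
              .to("github-push-events");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```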
Learn best practices for using Confluent Schema Registry, including understanding subjects and versions, pre-registering schemas, using data contracts, and more.
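As a hedged sketch of the pre-registration step (assuming a recent Confluent Schema Registry Java client and the default TopicNameStrategy subject naming), a schema can be registered before any producer ever writes to the topic; the registry URL, subject, and schema here are assumptions for illustration:

```java
// Sketch: pre-register an Avro schema under the "<topic>-value" subject,
// so producers in production only look up the existing schema ID.
import io.confluent.kafka.schemaregistry.ParsedSchema;
import io.confluent.kafka.schemaregistry.avro.AvroSchema;
import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;
import io.confluent.kafka.schemaregistry.client.SchemaRegistryClient;

public class PreRegisterSketch {
    public static void main(String[] args) throws Exception {
        SchemaRegistryClient client =
            new CachedSchemaRegistryClient("http://localhost:8081", 10);

        String subject = "clicks-value"; // default TopicNameStrategy: "<topic>-value"
        ParsedSchema schema = new AvroSchema(
            "{\"type\":\"record\",\"name\":\"Click\","
            + "\"fields\":[{\"name\":\"page\",\"type\":\"string\"}]}");

        int schemaId = client.register(subject, schema);
        System.out.println("Registered schema id: " + schemaId);
    }
}
```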
Learn about our vision for Tableflow, a new feature in private early access that makes it push-button simple to take Apache Kafka® data and feed it directly into your data lake, warehouse, or analytics engine as Apache Iceberg® tables.
We're thrilled to announce the general availability of Confluent Cloud for Apache Flink across all three major clouds. This means that you can now experience Kafka and Flink as a unified, enterprise-grade platform to connect and process your data in real time.
We’re excited to share the latest and greatest features on Confluent Cloud, in our first launch of 2024. Learn more about our latest updates including serverless Apache Flink, exciting pricing changes, updates to connectors, and more!
Check out all the highlights from the Apache Flink® 1.19 release!
Apache Kafka 3.7 introduces updates to the consumer rebalance protocol, an official Apache Kafka Docker image, JBOD support in KRaft-based clusters, and more!
Several key new features have been added to Confluent Cloud for Apache Flink this year including Topic Actions, Terraform support, and expansion into GCP and Azure. Let's take a look at these enhancements and how they empower users to harness the full potential of streaming data.