Save 25% (or even more) on your Kafka costs | Take the Confluent Kafka savings challenge
Change data capture is a popular method to connect database tables to data streams, but it comes with drawbacks. The next evolution of the CDC pattern, first-class data products, provides resilient pipelines that support both real-time and batch processing while isolating upstream systems...
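To make the contrast concrete, here is a minimal Python sketch, assuming a Debezium-style CDC envelope, of how an owning team might project raw change events into a stable, consumer-facing data product record so downstream pipelines are isolated from the upstream table schema. The field names and mapping are illustrative only, not taken from the post.

```python
from typing import Optional

def to_data_product(change_event: dict) -> Optional[dict]:
    """Map a raw CDC envelope (Debezium-style, assumed) to a published
    'customer' data product record with a stable, documented shape."""
    after = change_event.get("after")
    if after is None:
        # Deletes would be handled separately, e.g. by publishing a tombstone.
        return None
    return {
        "customer_id": after["id"],
        "full_name": f"{after['first_name']} {after['last_name']}",
        "email": after["email"],
        "updated_at": change_event.get("ts_ms"),
    }

# Example raw change event as a CDC connector might emit it.
raw = {
    "op": "u",
    "ts_ms": 1700000000000,
    "after": {"id": 7, "first_name": "Ada", "last_name": "Lovelace",
              "email": "ada@example.com"},
}
print(to_data_product(raw))
```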
Learn how the latest innovations in Kora enable us to introduce new Confluent Cloud Freight clusters, which can save you up to 90% at GBps+ scale. Confluent Cloud Freight clusters are now available in Early Access.
Learn how to contribute to open source Apache Kafka by writing Kafka Improvement Proposals (KIPs) that solve problems and add features! Read on for real examples.
Confluent Platform 7.4 now includes SBOMs, giving customers more transparency and control over their software deployments.
GitOps can work with policy-as-code systems to provide a true self-service model for managing Confluent resources. Policy-as-code is the practice of permitting or preventing actions based on rules and conditions defined in code. In the context of GitOps for Confluent, suitable policies...
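As a rough illustration of the policy-as-code idea (not the specific tooling the post covers, which in practice is usually a policy engine such as OPA evaluating GitOps pull requests), the Python sketch below encodes rules that approve or reject a requested topic change in CI. The topic-spec shape, region list, and limits are assumptions for the example.

```python
# Illustrative policy-as-code check: rules defined in code decide whether a
# requested Confluent resource change (here, a Kafka topic spec from a GitOps
# repo) is permitted. Limits and naming rules below are assumed org policy.

ALLOWED_REGIONS = {"us-east-1", "eu-west-1"}
MAX_PARTITIONS = 60

def validate_topic_spec(spec: dict) -> list[str]:
    """Return a list of policy violations for a proposed topic spec."""
    violations = []
    if spec.get("partitions", 0) > MAX_PARTITIONS:
        violations.append(f"partitions exceed the limit of {MAX_PARTITIONS}")
    if spec.get("region") not in ALLOWED_REGIONS:
        violations.append(f"region {spec.get('region')!r} is not approved")
    if not spec.get("name", "").startswith(spec.get("team", "") + "."):
        violations.append("topic name must be prefixed with the owning team")
    return violations

# A pull request proposing this spec would fail the policy check in CI.
print(validate_topic_spec({"name": "orders", "team": "payments",
                           "partitions": 120, "region": "ap-south-2"}))
```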
Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database service that is highly available and scalable. It is designed to deliver single-digit millisecond query performance at any scale. It offers a fast and flexible way to store...
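For context, here is a minimal boto3 sketch of the key-value access pattern DynamoDB is built around; the table name, key schema, and credentials setup are assumptions, not details from the post.

```python
import boto3

# Assumes a table named "orders" with partition key "order_id" already exists,
# and that AWS credentials and region are configured in the environment.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")

# Write a single item; attributes beyond the key are schemaless.
table.put_item(Item={"order_id": "o-1001", "status": "SHIPPED", "amount": 42})

# Point read by primary key, the access pattern DynamoDB optimizes for.
resp = table.get_item(Key={"order_id": "o-1001"})
print(resp.get("Item"))
```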
Announcing the latest updates to Confluent’s cloud-native data streaming platform: Kora Engine, Data Quality Rules, Custom Connectors, Streaming Sharing, and more.
Take a tour of the internals of Confluent’s Apache Kafka® service, powered by Kora: the next-generation, cloud-native streaming engine.
Why do our customers choose Confluent as their trusted data streaming platform? In this blog, we will explore our platform’s reliability, durability, scalability, and security by presenting some remarkable statistics and providing insights into our engineering capabilities.
The blog introduces Confluent Platform 7.4 and its key features, which enhance scalability, increase architectural simplicity, accelerate time to market, reduce ops burden, and ensure high-quality data streams. It also covers what's new in Apache Kafka 3.4.
Use the Confluent CLI and API to create Stream Designer pipelines from SQL source code.
This post details how to minimize internal messaging within Confluent Platform clusters. Service mesh and containerized applications have popularized the idea of control and data planes. This post applies that idea to Confluent Platform clusters and highlights its use in Confluent Cloud.
Announcing the latest updates to Confluent’s cloud-native data streaming platform: centralized identity management, enhanced RBAC, Client Quotas, and more.
Confluent is pleased to announce that the Confluent CLI—the leading command-line tool for managing enterprise Kafka deployments and modern data flow—is now source available under the Confluent Community License.
Building data streaming applications and growing them beyond a single team is challenging. Data silos develop easily and can be difficult to break down. The tools provided by Confluent’s Stream Governance platform can help break down those walls and make your data accessible to those who need it.
Self-managing connectors comes with major time and resource challenges, and takes on unnecessary risks of downtime that shift your team’s focus away from more strategic projects and innovations...
Setting up proactive, synthetic monitoring is critical for complex, distributed systems like Apache Kafka®, especially when they are deployed on Kubernetes and the end-user experience is at stake, and it is essential for healthy real-time data pipelines...
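As one way to picture such a synthetic probe (a sketch, not the monitoring setup the post describes), the snippet below uses the confluent-kafka Python client to measure produce-to-consume round-trip latency against a dedicated health-check topic; the topic name and bootstrap address are assumptions.

```python
import time
import uuid
from confluent_kafka import Producer, Consumer

TOPIC = "kafka-healthcheck"  # assumed dedicated probe topic
BOOTSTRAP = "localhost:9092"  # assumed bootstrap address

producer = Producer({"bootstrap.servers": BOOTSTRAP})
consumer = Consumer({
    "bootstrap.servers": BOOTSTRAP,
    "group.id": f"synthetic-probe-{uuid.uuid4()}",
    "auto.offset.reset": "latest",
})
consumer.subscribe([TOPIC])

# Wait for partition assignment so the probe message is not skipped.
while not consumer.assignment():
    consumer.poll(0.5)

# Produce a uniquely keyed probe message and time its round trip.
probe_id = str(uuid.uuid4())
sent_at = time.time()
producer.produce(TOPIC, key=probe_id, value=str(sent_at))
producer.flush()

latency = None
deadline = time.time() + 10
while time.time() < deadline:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    if msg.key() and msg.key().decode() == probe_id:
        latency = time.time() - sent_at
        break

print(f"round-trip latency: {latency:.3f}s" if latency is not None
      else "probe timed out")
consumer.close()
```

A probe like this would typically run on a schedule and export the latency (or timeout) to an alerting system, which is the sense in which the monitoring is proactive rather than reactive.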