At Confluent we’re committed to building the world's leading data streaming platform that's cloud-native, complete, and available everywhere your data and applications reside. We offer this data streaming platform as a fully managed service in the cloud—Confluent Cloud; as a self-managed software that runs in your own environments—Confluent Platform; or as a hybrid of each of these.
What are the four key capabilities of a data streaming platform? A data streaming platform is a software platform that gives you the ability to stream, connect, process, and govern all your data, and makes it available wherever it’s needed, however it’s needed, in real time. With every launch, we frame our newest innovations against these four key capabilities.
Today, we're excited to announce the release of Confluent Platform 7.7.
This release builds upon Apache Kafka® 3.7, reinforcing our core capabilities as a data streaming platform. Below are the release highlights, and you can find additional details about the features in the release notes.
New Key Capabilities:
Enhance security while reducing operational burden (General Availability) by managing application identities and credentials through your own OIDC identity provider with OAuth
Leverage Confluent Platform for Apache Flink® (Limited Availability) to access long-term support for on-prem or private cloud stream processing workloads
Increase scalability and simplify your architecture with seamless migrations for existing clusters from ZooKeeper to KRaft using Confluent for Kubernetes
Many organizations choose to keep their data on-premises for compliance and security reasons. As a leader in data streaming, we’re committed to making Confluent Platform the most secure way to stream data across on-premises and hybrid environments.
Confluent has gone beyond just being an on-premises Kafka solution. We’ve built a platform that is industry-compliant and trusted by default, adhering to all major security standards to protect your data. We’ve also built in additional enterprise-grade security capabilities to maintain the confidentiality of critical information, traceability of user actions, and secure access to resources, with scalability and standardization.
Today we’re taking another stride towards building the most trusted data streaming platform with OAuth/OIDC support for on-premises and hybrid workloads in production.
OAuth/OIDC for Confluent Platform, including management support using Confluent for Kubernetes and Confluent Platform Ansible, is now generally available for production workloads. OAuth is an industry standard for providing authentication, allowing you to access your resources and data without sharing or storing credentials. With this release, customers can now bring their own identity provider (Microsoft Entra ID, Okta, etc.) and enable OAuth/OIDC for authentication across Confluent Platform.
By integrating OAuth and OIDC into your Confluent Platform on-premises environment, you can:
Enhance security while reducing operational burden by managing application identities and credentials through your own OIDC identity provider
Streamline authentication with one source of identity by bringing your own identity provider and mapping its groups to your RBAC role bindings or ACLs
Maintain compliance without sacrificing efficiency by authenticating with industry-leading standards
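To make the OAuth/OIDC flow concrete, here is an illustrative sketch of the standard Apache Kafka SASL/OAUTHBEARER configuration against an external OIDC provider. The endpoint URLs, audience, and client credentials are placeholders, and exact property names can vary by Kafka version, so treat this as a sketch rather than the definitive Confluent Platform setup:

```properties
# Broker side (sketch): validate incoming tokens against the IdP's JWKS endpoint
sasl.enabled.mechanisms=OAUTHBEARER
sasl.oauthbearer.jwks.endpoint.url=https://idp.example.com/.well-known/jwks.json
sasl.oauthbearer.expected.audience=kafka-cluster

# Client side (sketch): fetch tokens from the IdP's token endpoint
security.protocol=SASL_SSL
sasl.mechanism=OAUTHBEARER
sasl.oauthbearer.token.endpoint.url=https://idp.example.com/oauth2/token
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  clientId="my-app" \
  clientSecret="<secret>";
```

Because the broker only ever sees short-lived tokens issued by your identity provider, long-lived credentials never need to be stored in Kafka itself.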
Join us on August 13, 2024, for a webinar and demo showcasing the latest security tools, including OAuth support, that have made hybrid and on-premises data streaming more secure and resilient than ever.
Earlier this year, we announced Confluent Cloud for Apache Flink® to enable simple, serverless stream processing. Now, we’re excited to complement our Flink offering by adding stream processing for self-managed workloads with Confluent Platform for Apache Flink.
Among stream processing frameworks, Flink has emerged as the de facto standard because of its performance and rich feature set. However, similar to self-managing other open-source tools like Kafka, self-managing Flink can be challenging due to its operational complexity, steep learning curve, and high costs for in-house support.
Relying on Flink user communities for support lacks the responsiveness and tailored assistance required for mission-critical applications. Additionally, the community maintains only the two most recent releases, without offering long-term support for specific versions. For customers seeking vendor support for Flink, having separate vendors for Flink and Kafka may require coordination to resolve issues involving both technologies, potentially leading to delays and confusion.
With Confluent Platform for Apache Flink, a Flink distribution supported by Confluent, customers can now rely on Confluent for long-term support for their on-premises or private-cloud Flink workloads, extending beyond what the open-source community offers.
Our opinionated enterprise-grade Flink distribution enables you to:
Minimize risk with consolidated Flink and Kafka support and expert guidance from the foremost experts in the data streaming industry
Receive timely assistance in troubleshooting and resolving issues, reducing the impact of any operational disruptions
Maintain secure and up-to-date stream processing applications with off-cycle bug and vulnerability fixes
Rather than maintaining only the two most recent releases, we will provide, subject to the purchase of an applicable support policy, three years of support for each Flink release from its launch, starting with Flink 1.18 and Flink 1.19. Our comprehensive support SLA covers everything from critical SEV 1 issues to minor SEV 3 concerns with swift resolution, providing uninterrupted operations and peace of mind. Our security approach includes continuous vulnerability scanning, quarterly patches, and rapid hotfixes for critical issues, ensuring that your systems are always protected.
By consolidating support for both Flink and Kafka with a single vendor, you can streamline the support process, maintain better integration and compatibility between the two technologies, and receive more comprehensive support for your entire streaming project.
Confluent Platform for Apache Flink, currently in Limited Availability, is a drop-in replacement for open-source Apache Flink with minimal changes to your existing Flink jobs and architecture. In the future, we plan on introducing enhancements to deepen the integration between Flink and Confluent Platform.
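Because it is a drop-in replacement, jobs written against the open-source Flink APIs should run unchanged. As an illustration, a standard Flink SQL job that reads from a Kafka topic uses the ordinary open-source Kafka connector (topic, broker address, and schema below are placeholders):

```sql
-- Sketch: a standard Flink SQL source table backed by Kafka
CREATE TABLE clicks (
  user_id STRING,
  url     STRING,
  ts      TIMESTAMP(3)
) WITH (
  'connector' = 'kafka',
  'topic' = 'clicks',
  'properties.bootstrap.servers' = 'broker-1:9092',
  'scan.startup.mode' = 'earliest-offset',
  'format' = 'json'
);

-- A simple aggregation over the stream
SELECT user_id, COUNT(*) AS click_count
FROM clicks
GROUP BY user_id;
```

Nothing in this job is specific to the Confluent distribution, which is the point: existing Flink SQL and DataStream workloads can move over without rewrites.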
In 2022, we released our fully managed HTTP Source connector for Confluent Cloud, which has since become one of our most popular connectors to integrate SaaS apps and internal microservices. We’re excited to extend this to Confluent Platform users with our self-managed HTTP Source connector. On par with the fully managed version, this connector guarantees that records are delivered at least once to the Kafka topic and supports multiple tasks.
The self-managed HTTP Source connector ingests data from external APIs and produces it to Kafka topics for real-time processing. Popular API sources include Coinbase, Stripe Events, Twilio, and Twitter.
This connector is now available for download on Confluent Hub. You can install it with the `confluent connect plugin install` command or by manually downloading the ZIP file.
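Once the plugin is installed, a connector instance is configured through the standard Kafka Connect REST API. The sketch below uses an assumed connector class name and property keys for illustration; check the connector's documentation on Confluent Hub for the exact configuration:

```json
{
  "name": "http-source-example",
  "config": {
    "connector.class": "io.confluent.connect.http.source.HttpSourceConnector",
    "tasks.max": "2",
    "url": "https://api.example.com/events",
    "kafka.topic": "http-events"
  }
}
```

POSTing a payload like this to the Connect worker's `/connectors` endpoint starts the connector, which then polls the API and produces records to the target topic with at-least-once delivery.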
Migrate from ZooKeeper to KRaft using Confluent for Kubernetes
Starting with the 2.9 release, Confluent for Kubernetes (CFK) has supported migration from a ZooKeeper-based Confluent Platform deployment to a KRaft-based deployment.
With the 2.9 release, CFK added support for Multi-Region Cluster deployment with KRaft.
Please note that ZooKeeper has been deprecated since Apache Kafka 3.5.0 and is planned for removal in Apache Kafka 4.0. Learn more about why ZooKeeper is being replaced with KRaft in this blog post.
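In a CFK-managed migration, the destination quorum is declared as a `KRaftController` custom resource alongside the existing Kafka cluster. The sketch below is written from memory of the CFK CRDs; the API version, field names, and the migration workflow itself should be verified against the Confluent for Kubernetes documentation:

```yaml
# Sketch (unverified field names): a KRaft controller quorum managed by CFK
apiVersion: platform.confluent.io/v1beta1
kind: KRaftController
metadata:
  name: kraftcontroller
  namespace: confluent
spec:
  replicas: 3                 # controller quorum size
  dataVolumeCapacity: 10Gi    # persistent storage per controller
  image:
    application: confluentinc/cp-server:7.7.0
    init: confluentinc/confluent-init-container:2.9.0
```

CFK then orchestrates moving the cluster metadata from ZooKeeper into the KRaft quorum, after which the ZooKeeper ensemble can be retired.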
OAuth/OIDC Management
With the 2.9 release, Confluent for Kubernetes began support for adding OAuth capabilities to Confluent Server, Schema Registry, Connect, and RestProxy, as well as Single Sign-On capabilities to Confluent Control Center and Confluent CLI.
FIPS-enabled mode
With this release of Confluent Platform, Confluent for Kubernetes introduces support for running Confluent Operator in FIPS-enabled mode. Additionally, CFK now supports deploying Connect, ksqlDB, and Schema Registry in FIPS-enabled mode.
For the full details on the latest in Confluent for Kubernetes, check out the release notes.
Confluent Platform 7.7 is built on Apache Kafka version 3.7. For more details about Apache Kafka 3.7, please read the blog post or check out the release video below with Danica Fine.
Join us on August 13, 2024, to see some of the latest innovations that have made hybrid and on-premises data streaming more secure and resilient than ever.
Download Confluent Platform 7.7 today to get started with the only cloud-native and comprehensive platform for data in motion, built by the original creators of Apache Kafka.
The preceding outlines our general product direction and is not a commitment to deliver any material, code, or functionality. The development, release, timing, and pricing of any features or functionality described may change. Customers should make their purchase decisions based on services, features, and functions that are currently available.
Confluent and associated marks are trademarks or registered trademarks of Confluent, Inc.
Apache®, Apache Flink®, and Apache Kafka® are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. No endorsement by the Apache Software Foundation is implied by the use of these marks. All other trademarks are the property of their respective owners.