
Confluent Supercharges Apache Flink® Offering with New Developer Tools and Enterprise-Ready Security

Confluent adds Table API support for Apache Flink®, making it even easier for developers to use Java or Python to build streaming applications

New private networking and encryption features safeguard data streams for enterprises in highly regulated industries

AUSTIN, Texas – Sept. 17, 2024 – Confluent, Inc. (NASDAQ:CFLT), the data streaming pioneer, introduced new Confluent Cloud capabilities that make stream processing and data streaming more accessible and secure. Confluent’s new support for the Table API brings Apache Flink® to Java and Python developers; private networking for Flink provides enterprise-level protection for use cases with sensitive data; Confluent Extension for Visual Studio Code accelerates the development of real-time use cases; and Client-Side Field Level Encryption encrypts sensitive data for stronger security and privacy.

“The true strength of using Apache Flink for stream processing is that it empowers developers to create applications that instantly analyze and respond to real-time data, significantly enhancing responsiveness and user experience,” said Stewart Bond, Research Vice President at IDC. “Managed Apache Flink solutions can eliminate the complexities of infrastructure management while saving time and resources. Businesses must look for a Flink solution that seamlessly integrates with the tools, programming languages, and data formats they’re already using for easy implementation into business workflows.”

More businesses are relying on stream processing to build real-time applications and pipelines for use cases spanning machine learning, predictive maintenance, personalized recommendations, and fraud detection. Stream processing lets organizations blend and enrich their data with information from across their business. Apache Flink is the de facto standard for stream processing. However, many teams hit roadblocks with Flink because it is operationally complex, difficult to secure, and expensive to run and manage.

“Thousands of teams worldwide use Apache Flink as their trusted stream processing solution to deliver exceptional customer experiences and streamline operations by shifting processing closer to the source, where data is fresh and clean,” said Shaun Clowes, Chief Product Officer at Confluent. “Our latest innovations push the boundaries further, making it easier for developers of all skill levels to harness this powerful technology for even more mission-critical and complex use cases.”

Confluent’s support for Table API extends Flink to teams with Java or Python experience

Confluent Cloud for Apache Flink offers the SQL API, a powerful and user-friendly tool for processing data streams. While Flink SQL is effective for quickly writing and executing queries, some teams favor programming languages like Java or Python, which allow for more control over their applications and data. This can be especially important when developing complex business logic or custom processing tasks.

Adding support for the Table API to Confluent Cloud for Apache Flink enables Java or Python developers to easily create streaming applications using familiar tools. By supporting both Flink SQL and the Table API, Confluent Cloud for Apache Flink lets developers choose the best language for their use cases.

Support for Table API enables companies to:

  • Enhance language flexibility by enabling developers to use their preferred programming languages, taking advantage of language-specific features and custom operations.
  • Streamline the coding process by leveraging customers’ integrated development environment (IDE) of choice, with auto-completion, refactoring tools, and compile-time checks that ensure higher code quality and minimize runtime issues.
  • Make debugging easier with an iterative approach to data processing and streamlined CI/CD integration.
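For illustration, a minimal sketch of what such a program could look like with the open source Apache Flink Table API in Java is shown below. The table names, fields, and sink are hypothetical, and the configuration needed to connect the environment to Confluent Cloud is omitted.

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.TableEnvironment;

    import static org.apache.flink.table.api.Expressions.$;
    import static org.apache.flink.table.api.Expressions.lit;

    public class OrdersByCustomer {
        public static void main(String[] args) {
            // Create a streaming TableEnvironment. Connection settings for
            // Confluent Cloud would be supplied separately and are not shown here.
            TableEnvironment env =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

            // "orders" is a hypothetical table backed by a Kafka topic.
            Table orders = env.from("orders");

            // Keep only large orders and aggregate revenue per customer.
            Table revenue = orders
                .filter($("amount").isGreater(lit(100)))
                .groupBy($("customer_id"))
                .select($("customer_id"), $("amount").sum().as("total_amount"));

            // Continuously write the (updating) result to a hypothetical sink table;
            // the sink must support updates since this is an aggregation.
            revenue.executeInsert("high_value_revenue");
        }
    }

Because this is ordinary Java, the filtering and aggregation logic can be unit tested, refactored, and type-checked in an IDE before it ever runs against live streams.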

“We’re reimagining financial services and it’s imperative that we adopt new technologies that not only protect our customers but are also easily used by our teams,” said Shujahat Bashir, Director of Software Engineering at Thrivent. “Data streaming and stream processing can help power real-time fraud detection and payments processing for an exceptional customer experience. We are looking forward to using Confluent’s new Apache Flink features including Private Networking, Flexible Schema Management, and Table APIs to extend the power of Flink to the languages and development styles we already use today.”

Support for the Table API is available in open preview for testing and experimentation, with general availability coming soon.

Enabling private networking for Flink provides a secure environment for data streaming workloads

With more data than ever before and more teams using cloud and hybrid solutions, private networking is essential for protecting against unauthorized access and cyber threats. Confluent Cloud now offers private networking support for Flink, providing a critical layer of security for businesses that need to process data within strict regulatory environments.

By enabling private networking for Flink, Confluent users can:

  • Boost data security and privacy between Flink and Kafka by safeguarding in-transit data and ensuring secure connections between clients and Flink within a private network.
  • Simplify secure network configuration, making it easier to set up private connections without requiring extensive networking expertise.
  • Facilitate flexible and secure stream processing by seamlessly joining and processing data across different Kafka clusters, ensuring data accessibility while adhering to strict security protocols.

Private networking support is generally available on AWS for Enterprise and Dedicated clusters. Additional cloud platforms are coming soon.

Confluent Extension for Visual Studio Code streamlines workloads and accelerates development cycles

Teams working with real-time data platforms like open source Apache Kafka often struggle with fragmented tools, clunky workflows, and constant switching between environments and interfaces. This disjointed experience makes it tough to integrate real-time data into applications, slowing down productivity and innovation. Integrated development environments can make this process much easier, and Visual Studio Code (VS Code) is the most popular choice for code editing, used by 54% of developers globally. Confluent Extension for Visual Studio Code simplifies the development process by integrating Confluent directly into teams’ preferred editor.

Confluent Extension for Visual Studio Code enables teams to:

  • Streamline topic management to easily create, edit, and browse Kafka topics with intuitive tools that simplify debugging and boost efficiency.
  • Code and debug in one place by writing, executing, and debugging Kafka clients, Flink queries, and streaming applications directly in VS Code with enhanced productivity features like code completion.
  • Seamlessly manage cloud resources to provision and control Confluent Cloud clusters within VS Code, reducing complexity and streamlining cloud operations.
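To make this concrete, the extension is aimed at everyday client code such as the minimal Kafka producer sketch below. The bootstrap server, API key placeholders, and the "orders" topic are hypothetical; real connection details would come from your Confluent Cloud cluster.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class HelloStream {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Placeholder connection details for a Confluent Cloud cluster.
            props.put("bootstrap.servers", "<BOOTSTRAP_SERVER>");
            props.put("security.protocol", "SASL_SSL");
            props.put("sasl.mechanism", "PLAIN");
            props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"<API_KEY>\" password=\"<API_SECRET>\";");
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            // Produce a single record to a hypothetical "orders" topic.
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("orders", "order-1", "{\"amount\": 250}"));
            }
        }
    }

Writing, running, and debugging code like this without leaving the editor is the workflow the extension is designed to streamline.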

Confluent Extension for Visual Studio Code supports Kafka clients and is available in early access. New Flink capabilities and general availability are expected later in 2024.

Client-Side Field Level Encryption safeguards teams’ most sensitive data while meeting regulatory requirements

Data security and privacy are top priorities for organizations in regulated industries, including financial services, healthcare, and the public sector. These organizations are subject to compliance rules about how sensitive data, like personally identifiable information (PII), can be accessed, moved, and stored.

Client-Side Field Level Encryption helps teams protect their most sensitive data by enabling them to encrypt individual fields within their data streams for enhanced security and compliance. Complementing existing Confluent security features, Client-Side Field Level Encryption further reduces the risk of unwanted access by encrypting data on the client side so that even system admins and users with highly privileged access cannot view messages in plaintext.

With Client-Side Field Level Encryption teams can:

  • Improve security of sensitive data and adhere to strict compliance requirements.
  • Maintain flexible and granular access control of which specific fields to encrypt.
  • Lower total cost of ownership and operational complexity by reducing the need for topic duplication.
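Confluent’s feature is configured through its Stream Governance suite rather than hand-written cryptography, but the underlying idea can be sketched in plain Java: the sensitive field is encrypted with a key the client controls before the record is produced, so brokers and anyone with broker-side access only ever see ciphertext for that field. The field value, algorithm choice, and inline key generation below are illustrative only; in practice the data key would come from a KMS, and this is not Confluent’s actual Client-Side Field Level Encryption API.

    import java.nio.charset.StandardCharsets;
    import java.security.SecureRandom;
    import java.util.Base64;
    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.GCMParameterSpec;

    public class FieldEncryptionSketch {
        public static void main(String[] args) throws Exception {
            // A data key; in a real deployment this would be retrieved from a KMS,
            // never generated inline like this.
            KeyGenerator keyGen = KeyGenerator.getInstance("AES");
            keyGen.init(256);
            SecretKey key = keyGen.generateKey();

            // Encrypt only the sensitive field (e.g., an SSN) with AES-GCM
            // before the record is serialized and sent to Kafka.
            String ssn = "123-45-6789";
            byte[] iv = new byte[12];
            new SecureRandom().nextBytes(iv);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] ciphertext = cipher.doFinal(ssn.getBytes(StandardCharsets.UTF_8));

            // Only this field is replaced with ciphertext; the rest of the record
            // stays readable for routing and processing downstream.
            String encryptedField = Base64.getEncoder().encodeToString(ciphertext);
            System.out.println("ciphertext field value: " + encryptedField);
        }
    }

Because encryption happens per field rather than per topic, non-sensitive fields remain usable by downstream consumers, which is what avoids duplicating topics just to separate protected data.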

Client-Side Field Level Encryption is currently in limited availability and is expected to become generally available soon for all Confluent Cloud users through the company’s Stream Governance suite.


As our roadmap may change in the future, the features referred to herein may change, may not be delivered on time, or may not be delivered at all. This information is not a commitment to deliver any functionality and customers should make their purchasing decisions based upon features that are currently available.

Confluent is the data streaming platform that is pioneering a fundamentally new category of data infrastructure that sets data in motion. Confluent’s cloud-native offering is the foundational platform for data in motion—designed to be the intelligent connective tissue enabling real-time data, from multiple sources, to constantly stream across the organization. With Confluent, organizations can meet the new business imperative of delivering rich, digital front-end customer experiences and transitioning to sophisticated, real-time, software-driven back-end operations. To learn more, please visit www.confluent.io.

Confluent® and associated marks are trademarks or registered trademarks of Confluent, Inc.

Apache® and Apache Kafka® are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. No endorsement by the Apache Software Foundation is implied by the use of these marks. All other trademarks are the property of their respective owners.