At Treehouse Software, when we speak with customers who are planning to modernize their enterprise mainframe systems, there’s a common theme: they are faced with decades of mission-critical and historical legacy mainframe data in disparate databases, as well as a variety of other data stores inherited through mergers, acquisitions, and other company growth scenarios. Many applications, connections, databases, and stores are located in different on-premises systems or on multiple cloud platforms. Customers say they often find themselves in the middle of an organically formed, complex, multi-cloud environment with little historical context, and they are trying to connect and integrate these systems as best they can. Generally, those integrations are point-to-point and brittle, and they don’t scale with growth.
We’ve seen a growing interest in setting up mainframe-to-Confluent data pipelines without necessarily having a final target fully thought out. At first glance, this can come across as strange – rather like building a bridge to nowhere, as Confluent is often not considered a final datastore, but simply an event streaming platform. But in the bigger, longer-term picture, an enterprise can keep its options open by propagating data to a highly reliable, very scalable platform like Confluent that can be “subscribed to” by any number of current or yet-to-be-invented ETL toolsets and target datastores. Many firms have found success with this approach of using their event brokers as a central nervous system for business-critical data.
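To illustrate the decoupling this buys, here is a minimal sketch using the confluent-kafka Python client, with hypothetical topic and group names: two independent consumer groups read the same replicated topic, each keeping its own offsets, so new subscribers can be added later without touching the pipeline.

```python
from confluent_kafka import Consumer

# Hypothetical topic fed by the mainframe replication pipeline.
TOPIC = "mainframe.customer.changes"

def make_consumer(group_id: str) -> Consumer:
    # Each group.id tracks its own offsets, so consumers are independent.
    return Consumer({
        "bootstrap.servers": "localhost:9092",  # replace with your cluster
        "group.id": group_id,
        "auto.offset.reset": "earliest",
    })

# Two downstream systems subscribe to the same topic independently:
# one might feed a warehouse today, the other a tool adopted next year.
# Neither affects the other's read position; the poll loops are omitted here.
warehouse_loader = make_consumer("warehouse-loader")
analytics_feed = make_consumer("analytics-feed")
for consumer in (warehouse_loader, analytics_feed):
    consumer.subscribe([TOPIC])
```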
With that said, a site that is doing mainframe-to-Confluent propagation does ultimately need to be able to pull the data from Confluent and land it into a viable target datastore. So Treehouse Software’s current work—enabling the targeting of DynamoDB, Cosmos DB, Snowflake, and others as destinations—is seeing increased popularity among new and existing customers.
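As a rough sketch of that landing step (all names here are illustrative, not part of any Treehouse or Confluent API), the loop below consumes change events and hands them to a placeholder load_batch() function standing in for a DynamoDB, Cosmos DB, or Snowflake writer; committing offsets only after a successful load yields at-least-once delivery into the target.

```python
import json
from confluent_kafka import Consumer

def load_batch(rows: list[dict]) -> None:
    # Placeholder: in practice this would call the target datastore's
    # client (e.g., a DynamoDB batch write or a Snowflake COPY/INSERT).
    print(f"loaded {len(rows)} rows into target")

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # replace with your cluster
    "group.id": "target-sink",
    "auto.offset.reset": "earliest",
    "enable.auto.commit": False,  # commit manually after a successful load
})
consumer.subscribe(["mainframe.customer.changes"])  # hypothetical topic

batch: list[dict] = []
try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        batch.append(json.loads(msg.value()))
        if len(batch) >= 100:
            load_batch(batch)
            consumer.commit(asynchronous=False)  # offsets advance only now
            batch.clear()
finally:
    consumer.close()
```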
The most common Mainframe-to-Confluent use cases
Customers want to modernize applications on cloud and/or open systems without disrupting the existing critical work on legacy systems. They also want to bring together, view, and manage data from applications, databases, data warehouses, etc. that have been spread over many vastly different systems.
The Treehouse and Confluent Solution: Avoid replicating the same complexity to newer systems
Confluent allows customers to replace brittle point-to-point interconnections with a real-time, global data plane that connects all of the systems, applications, datastores, and environments that make up an enterprise. That’s possible regardless of whether systems run on-prem, in the cloud, or a combination of both.
Greg DeMichillie, Vice President of Product and Solutions Marketing at Confluent, discusses transitioning to a hybrid or multicloud architecture:
For those customers looking to move mainframe data to Confluent, Treehouse Software’s tcVISION is the mainframe data connector that performs real-time synchronization of data sources to Confluent Platform or Confluent Cloud, allowing for rapid data movement to newer data sinks and target platforms on AWS, Azure, Google Cloud, and other services.
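tcVISION’s actual wire format isn’t shown in this post, so the snippet below only assumes a generic shape for a CDC record: a JSON event carrying an operation code plus before/after images, keyed by the source record’s primary key so all changes to one record stay in one partition.

```python
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})  # your cluster

# Assumed CDC event shape (illustrative only, not tcVISION's format):
# an operation code plus before/after images of the changed record.
event = {
    "source": "DB2_ZOS.CUSTOMER",   # hypothetical source identifier
    "op": "UPDATE",                  # INSERT | UPDATE | DELETE
    "before": {"CUST_ID": 1042, "BALANCE": 250.00},
    "after":  {"CUST_ID": 1042, "BALANCE": 175.50},
}

# Keying by primary key keeps all changes for a record in one partition,
# preserving per-record ordering for downstream consumers.
producer.produce(
    "mainframe.customer.changes",        # hypothetical topic
    key=str(event["after"]["CUST_ID"]),
    value=json.dumps(event),
)
producer.flush()
```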
Additionally, tcVISION supports many mainframe data sources for both online and offline scenarios. Data can be replicated from IBM DB2 z/OS, DB2 z/VSE, VSAM, IMS/DB, CA IDMS, CA DATACOM, or Software AG ADABAS. tcVISION can replicate data to many targets including Confluent Platform or Cloud. To learn more, see the complete list of supported tcVISION sources and targets. Here’s a look at the architecture that’s created with Confluent and tcVISION:
Learn more in this post on enterprise change data capture (CDC) to Kafka with tcVISION and Confluent.
With tcVISION’s groundbreaking mainframe CDC connector and Confluent’s ability to serve as the multi-tenant data hub, it’s possible to aggregate data from multiple sources and publish it into various Kafka topics. Putting end-to-end data in motion under a simplified hybrid and multicloud architecture lets enterprise customers oversee their data pipelines and manage policy and governance in one place.
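One way to work with Confluent as that multi-source hub is regex subscription: a single consumer picks up every topic published under a shared prefix and routes by topic name. The “mainframe.” prefix below is a hypothetical naming convention, not something tcVISION mandates.

```python
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # replace with your cluster
    "group.id": "governance-audit",
    "auto.offset.reset": "earliest",
})

# A leading '^' tells the client to treat the string as a regex,
# matching every topic under the (hypothetical) 'mainframe.' prefix:
# mainframe.customer.changes, mainframe.orders.changes, and so on.
consumer.subscribe(["^mainframe\\..*"])

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    # Route or audit by originating topic, e.g., per-source policy checks.
    print(f"{msg.topic()}: {len(msg.value() or b'')} bytes")
```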