Boston is a city of many firsts. The first public park, the first public school, the first UFO sighting in America. And, we just added one more to the list: The first stop in North America for our Data in Motion Tour this year.
Many Kafka enthusiasts, including me, braved a nor’easter to take part in the event and learn how Confluent has reinvented Apache Kafka® to meet the modern business demands in the data streaming era. Data practitioners came away with tips on meeting business goals with streaming data, connecting customers to real-time insights, and best practices for fueling real-time use cases.
My key takeaways? Forward-thinking businesses are:
Using real-time data as a differentiator
Shifting key resources to higher-value tasks instead of having them spend their time managing data infrastructure
Relying on fully managed services to build a highly scalable and resilient data infrastructure
Banking on data streaming to propel their modernization journey, create unified customer experiences, and much more
To kick things off, Confluent Field CTO William LaForest briefed attendees on why doing data streaming at scale means you need more than Kafka.
“If you have an organization running Apache Kafka across lots of businesses and projects, it changes the way you need to run it—so you get real value out of it,” LaForest said.
He walked the audience through why getting started with Kafka is easy, but managing a complex distributed system in production is much harder. Running Kafka yourself brings costs and complexity, including cluster sizing, provisioning, load rebalancing, upgrades, and security patches, that businesses don't want to spend their time on.
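LaForest's point about the easy on-ramp is simple to see in code. As a minimal sketch (the broker address and the "orders" topic are placeholders of mine, not from the talk), a working Kafka producer fits in a handful of lines, and none of the operational concerns above appear in it:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class QuickStartProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // One broker address is all the "getting started" story needs.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "order-1", "{\"amount\": 42}"));
            producer.flush(); // block until the record reaches the broker
        }
        // Sizing, rebalancing, upgrades, and patching never show up in
        // application code like this; they live in the cluster behind
        // that single bootstrap.servers line.
    }
}
```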
[#shamelessplug: We wrote a whole ebook explaining the why and how of Confluent Cloud being so much more than Kafka, in case you want to take a peek.]
But don’t just take our word for it. Here’s what Manoj Vasudev, solution architect at Clean Harbors and a Data in Motion Tour first-timer, had to say:
“It’s the ease of use of fully managed Kafka over self-managed Kafka that drove my company to embrace Confluent,” Vasudev told me. “We spent some time doing a POC on open source Kafka. It was not that challenging, but it was a time-consuming exercise. We quickly realized we would run into maintenance and operational overhead in the future. Confluent doesn't just help us get the most out of Kafka, it also makes things a lot easier. For instance, Confluent Cloud offers pre-built, fully managed Kafka connectors that make it easy to instantly connect to popular data sources and sinks.”
My main incentive for attending the event? No, not the cool swag.
Instead, it was the fireside chat scheduled with Nasdaq’s Ruchir Vani.
Vani, Nasdaq’s Director of Software Engineering, shared why the company is planning to move from OSS Kafka to Confluent.
Vani runs the Data Platform team and is responsible for the Nasdaq Data Link streaming platform, which delivers real-time exchange data and other fundamental financial information to customers via a cloud API.
While OSS Kafka was their streaming technology of choice, managing it became an issue, especially as they started to scale. Here’s a quick look at the challenges of self-managing Kafka that Vani highlighted:
Keeping up with frequent upgrades
Managing scalability—scaling up and down as needed
Designing monitoring and alerting systems (a minimal example of what that entails follows this list)
The connector ecosystem growing more complex with each new connector added
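To make the monitoring point above concrete: even the most basic DIY health check means writing and maintaining tooling yourself. The sketch below assumes a plain Kafka AdminClient and a placeholder broker address; it is illustrative, not Nasdaq's actual tooling:

```java
import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.DescribeClusterResult;

public class ClusterHealthCheck {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Properties props = new Properties();
        // Placeholder address; a real check would read this from config.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            DescribeClusterResult cluster = admin.describeCluster();
            // Counting live brokers is the bare minimum; real monitoring also
            // tracks under-replicated partitions, consumer lag, disk, and more.
            System.out.printf("Cluster %s: %d brokers online%n",
                    cluster.clusterId().get(),
                    cluster.nodes().get().size());
        }
    }
}
```

Real alerting then layers scheduling, thresholds, and notifications on top of checks like this, which is exactly the undifferentiated work a managed service absorbs.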
“One of the important areas of our business is allocating engineering resources to manage and upgrade Kafka clusters while also building more data products,” said Vani. “Striking the right balance between these two presents a unique opportunity to unlock innovation and value within our team.”
Currently, Nasdaq is using Confluent in its user acceptance testing environment.
“Confluent has enabled us to add more and more datasets to our system, which allows us to quickly grow and build more products. Our engineering team can focus on spending more time developing new data products as well. Plus, it allows other internal teams to access the data they want and create new use cases,” he said.
Ruchir Vani, Director of Software Engineering, Nasdaq:
"For me, events like the Data in Motion Tour are always a good place to meet with other Kafka enthusiasts."
"Once people start getting familiar with Confluent, events like this allows them to see how customers are using the product, understand their journey, and learn the why behind choosing or considering Confluent, along with hearing about their learnings and findings."
Manoj Vasudev, Solution Architect at Clean Harbors:
"We have seen Kafka being used in major companies like Netflix, Uber, etc. But it was a great experience to hear firsthand from companies like Nasdaq. We are Confluent customers, and the reasons we went with Confluent are similar to those Nasdaq shared, which make us feel comfortable and confident with our approach."
"Based on my recent experience with this event, we will definitely recommend it to our friends and colleagues who are already into data streaming or planning to explore Confluent."
Hemanth Vedagarbha, SVP, Global Commercial Sales at Confluent:
"The Data In Motion Tour is about giving attendees the opportunity to learn about how data streaming can help move their business forward, and help people understand how Kafka and Confluent Cloud can work best for them."
"Not only is this a great networking event, there are educational sessions, hands-on workshops, and demos lined up, including building streaming data pipelines with our latest innovation and how to get started with Confluent Cloud in a hybrid environment. The Data in Motion Tour is also an opportunity to meet and network with several Confluent customers, partners, employees, and hear from the best industry experts on Kafka and Confluent."