
Best of Kafka Summit 2020 Roundup

Written by

If you know me, you know two things. First, I am committed to remote work as an effective way to build a company; I’ve been a remote employee for 19 of the past 20 years, and it’s working pretty well so far. Second, I’m committed to the idea that being together in one place is an irreplaceable means of collaborating and relating to one another. Living where you want and working somewhere else is great, but you just gotta be in the same physical space sometimes, you know?

Well, not in the COVID era, and not for Kafka Summit 2020. But having seen some sessions, interacted with you all on Twitter, run a few BOFs and Ask the Experts sessions, and enjoyed a great day 1 and day 2, I really have to declare Kafka Summit 2020 to be a tremendous online success. All sessions and slides are available, and we will also be posting them on our site soon.

When we started planning the event, I was eager to preserve an event-ness to it: that sense that you are not just watching a YouTube playlist, but you are part of things happening now, and part of a group of people interacting together—maybe not face to face, but still together in the way we all do our jobs remotely these days. I have to tell you, I’m delighted at just how well this worked! The buzz and the excitement typical of a Kafka Summit were all there, just multiplied 20x, with more than 35,000 registrations from more than 10,700 individual companies. And a much larger cross-section of the planet was able to be there, with all of you representing 143 countries. 143! I am still having a hard time believing that, but it’s a wonderful thing.

Highlights

I actually got to see some sessions today, which is a rare privilege for me. Usually, Summit days are filled with meetings and rehearsals for tomorrow and everything that is not a session, but today I got to see a couple: Robin Moffatt told us about how to build a streaming data pipeline with ksqlDB, as he is uniquely qualified to do. He was on holiday this week with spotty internet but valiantly resurfaced to take questions during his pre-recorded session. I actually got to pitch in on a few answers, which was a pleasure to do.

Apache Kafka and ksqlDB in action: Let's build a streaming data pipeline
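Robin’s recording is the right place to see the full ksqlDB pipeline, but as a flavor of what a streaming data pipeline looks like in code, here is a minimal, hypothetical sketch using the Kafka Streams Java API: read events from one topic, filter them, and write the survivors to another. The topic names and the crude JSON string matching are invented for illustration and are not taken from Robin’s demo.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

public class RatingsPipeline {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "ratings-pipeline");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // Read raw rating events, keep only the one-star ones, and write them
        // to a downstream topic. (Naive string matching stands in for the
        // proper schema handling a real pipeline would use.)
        KStream<String, String> ratings = builder.stream("ratings");
        ratings.filter((key, value) -> value != null && value.contains("\"stars\": 1"))
               .to("poor-ratings");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

The same shape expressed in ksqlDB, as in Robin’s session, is a couple of SQL statements rather than a Java program, which is exactly the point of the talk.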

I saw a bit of Kate Stanley and Grace Jansen’s talk on reactive Apache Kafka®, which was as excellent as I’d hoped it would be. I hope these ideas get more uptake going forward.

I missed many friends’ talks, including Viktor Gamov’s talk on testing Kafka Streams, Ricardo Ferreira’s talk yesterday on his now-famous Pac-Man application (which continues to evolve on a monthly basis), Anna Povzner’s talk on multi-tenant Kafka, and Anna McDonald’s talk on multi-region Kafka Streams applications. (I did not do well with Annas today.) But the good news is that all of these session recordings exist and are there to be played back when I have the time. Likely for me this will be while I’m cooking breakfast or dinner—I’ll try to tweet proof of this in the coming weeks.

Jay Kreps gave an inspirational keynote about the fusion of cloud-native systems and event streaming platforms, culminating in a project we at Confluent call Project Metamorphosis. His argument is that a more or less complete event streaming platform that also has the properties of a system we could credibly call cloud-native is not just two good systems added to each other, but a thing from which new properties emerge, whose value is greater than the linear superposition of its parts. Jay is not wrong.

And it’s all well and good for me to say what I thought of Jay’s keynote, but this word cloud shows what you thought of it. You seemed to be generally positive:

[Word cloud of audience reactions: positive feedback]

Sam Newman gave a characteristically lively and intellectually rich account of how we should rethink our data infrastructure to ensure that data works for us, rather than we for it. In short, he argued that building large systems around a mutable, update-in-place datastore (e.g., a database) is not the path forward for building scalable distributed applications with real-time performance requirements. Rather, we should build these systems on top of a distributed log like Kafka. For a person whose specialty is microservices and who regularly helps companies succeed in their migrations to the same, Sam is more likely than almost anyone on the planet to be right in making this claim. Watch his keynote, if you haven’t yet.

The tyranny of data with Sam Newman
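To make the keynote’s point a little more concrete, here is a minimal, hypothetical sketch of the append-only alternative: rather than updating a row in place, a service publishes each change as an immutable event to a Kafka topic, and downstream consumers derive whatever state they need from the log. The topic name and payload below are invented for illustration; they are not from Sam’s talk.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class AddressChangedEvents {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Append a fact about what happened instead of overwriting state.
            // Consumers can rebuild current state, keep full history, or react
            // to the change in real time.
            producer.send(new ProducerRecord<>(
                    "customer-address-changed",
                    "customer-42",
                    "{\"customerId\":\"42\",\"newCity\":\"Mountain View\"}"));
            producer.flush();
        }
    }
}
```

The design choice is the interesting part: the log becomes the shared source of truth, and each service keeps whatever local view of it suits its own needs.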

And there was the closing keynote by the author of this blog post. I was able to see a lot of the Summit content before the show since it was all pre-recorded, and because as the head of the program committee, I am required to know about these things. Stepping back and looking at it all, I really got the sense that Kafka is a system that is entering a vital and growing maturity.

People, organizations, and software ecosystems mature in stages, and what works in a previous stage can seem laughably inadequate in a later one. The sorts of directions in which Kafka, broadly construed, is pushing right now—operating across cloud regions, ever-increasing elasticity, platform-native SQL stream processing, more and more of the ecosystem being drawn into managed cloud services that run everywhere—these are things that a fully-grown, modern platform does.

Kafka didn’t do these things at the first Kafka Summit in 2016. It couldn’t have, because it was working on growing in other areas, like advanced stream processing and taking its first steps in the cloud. But now these are the features we, its users and its builders, need, and sure enough, they are the features the Kafka ecosystem is growing. That’s what I saw happening at this Summit, and that’s something that’s got me fairly well excited.

So that’s a wrap on Kafka Summit 2020. I’m so glad we could be together virtually this week, and I look forward to the next time we gather online and in person. I hope to see you soon!

  • Tim is the Vice President of Developer Relations at Confluent, where he and his team work to make streaming data and its emerging toolset accessible to all developers. You can find Tim speaking regularly at conferences and on YouTube, where he makes complex technology topics more approachable. He lives with his wife and stepdaughter in Mountain View, California, USA. He has three adult children, three stepchildren, and four grandchildren.
