We are very excited for the GA of Apache Kafka 0.11.0.0, which is just days away. This release brings many new features, as described in the previous Log Compaction blog post.
The most notable new feature is Exactly-Once Semantics (EOS). Kafka’s EOS capabilities provide an idempotent producer, giving exactly-once, in-order delivery per partition, and stronger transactional guarantees, with atomic writes across multiple partitions. Together, these strong semantics make writing applications easier and expand Kafka’s addressable use cases. You can learn more about EOS in the online talk on June 29, 2017.
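To make this concrete, here is a minimal sketch of the transactional producer API introduced in 0.11, assuming a local broker; the topic names and transactional id are hypothetical:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;

public class TransactionalSend {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: local broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Idempotence de-duplicates producer retries, giving exactly-once,
        // in-order delivery per partition.
        props.put("enable.idempotence", "true");
        // A transactional.id enables atomic writes that span multiple partitions.
        props.put("transactional.id", "example-txn-id"); // hypothetical id

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.initTransactions();
        try {
            producer.beginTransaction();
            // These two writes commit or abort together, even though they go
            // to different topics (and therefore different partitions).
            producer.send(new ProducerRecord<>("orders", "order-1", "created"));
            producer.send(new ProducerRecord<>("payments", "order-1", "pending"));
            producer.commitTransaction();
        } catch (KafkaException e) {
            // For recoverable errors the transaction can be aborted and retried;
            // fatal errors (e.g. ProducerFencedException) require closing the producer.
            producer.abortTransaction();
        } finally {
            producer.close();
        }
    }
}
```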
The recent 2017 Apache Kafka Report showed that 37% of Kafka adopters now use the Kafka Connect API. As part of the streaming ETL paradigm, Kafka connectors make it simple to load data from other systems into Kafka and to extract data from Kafka into other systems. The full list of Kafka connectors has recently been updated with several new connectors contributed by the community.
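To give a flavor of how little is needed to wire up a connector, here is a minimal sketch of a standalone source connector configuration, modeled on the FileStreamSource demo connector that ships with Kafka; the file path and topic name below are placeholders:

```properties
# connect-file-source.properties: stream lines of a file into a Kafka topic
name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=/tmp/input.txt
topic=connect-test
```

A sink connector configuration looks much the same, with a topics= setting naming the Kafka topics to drain into the external system.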
Video recordings and slide decks from the sessions at Kafka Summit NYC are now available. And in a few days, the Kafka Summit committee will announce the list of sessions for the upcoming Kafka Summit in San Francisco. If you would like to attend the conference, which takes place on August 28, 2017, please register soon.
Notable blogs and presentations:
If you want to engage directly with the Kafka community, there are a variety of ways to do so: Google Group, Slack, Reddit, LinkedIn, and Twitter, and please join us at a Kafka Meetup group in your area! Alternatively, feel free to reach out to us at community@confluent.io.
We are proud to announce the release of Apache Kafka 3.9.0. This is a major release, the final one in the 3.x line, and also the last major release to feature the deprecated Apache ZooKeeper® mode. From 4.0 onward, Kafka will always run without ZooKeeper.
In this third installment of a blog series examining Kafka producer and consumer internals, we turn our attention to Kafka consumer clients, examining how consumers interact with brokers, coordinate assignment of their partitions, and issue fetch requests to read data from Kafka topics.
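For readers who want the client-side view of those interactions, here is a minimal sketch of a consumer poll loop, assuming a local broker; the topic and group names are hypothetical:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SimpleConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: local broker
        // Consumers sharing a group.id divide the topic's partitions among themselves.
        props.put("group.id", "example-group"); // hypothetical group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // subscribe() delegates partition assignment to the group coordinator broker.
            consumer.subscribe(Collections.singletonList("example-topic")); // hypothetical topic
            while (true) {
                // poll() drives the fetch requests sent to partition leaders
                // and returns whatever batches of records have arrived.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```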