Implementing schemas over your data is essential for any enduring event streaming system, particularly ones that share data between different microservices or teams. Schemas enforce the implied contract between applications that produce your data and downstream applications that consume your data.
Schema Registry 101 is an introductory course during which you will learn how to use schemas and a schema registry to establish this contract.
In the first course module, you will learn how the schema registry provides what you need to keep client applications in sync with the data changes in your organization or business.
This module is followed by a hands-on exercise during which you will learn how to configure applications to connect with a Kafka cluster, Schema Registry, and ksqlDB in Confluent Cloud. This will prepare you for the hands-on exercises that follow several course modules.
In this module you will learn about the workflow of using schemas: writing schema files, adding them to a project, and using tools such as Maven and Gradle to generate the model objects that schemas represent and to register and update the schemas in a schema registry.
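As a sketch of what such a build setup can look like, here is a hypothetical Gradle configuration using two community plugins: one that generates Java classes from Avro schema files and one that registers schemas with a Schema Registry. The plugin IDs, versions, registry URL, subject name, and file path are all illustrative assumptions, not details from the course exercise.

```groovy
plugins {
    id "java"
    // Generates Java model classes from .avsc files (community plugin; version illustrative)
    id "com.github.davidmc24.gradle.plugin.avro" version "1.9.1"
    // Registers schemas with a Schema Registry (community plugin; version illustrative)
    id "com.github.imflog.kafka-schema-registry-gradle-plugin" version "1.12.0"
}

schemaRegistry {
    // Hypothetical Confluent Cloud Schema Registry endpoint
    url = "https://psrc-xxxxx.us-east-2.aws.confluent.cloud"
    credentials {
        // Read the API key and secret from the environment rather than the build file
        username = System.getenv("SR_API_KEY")
        password = System.getenv("SR_API_SECRET")
    }
    register {
        // Register the schema file under the subject "orders-value"
        subject("orders-value", "src/main/avro/order.avsc", "AVRO")
    }
}
```

With a setup along these lines, code generation runs as part of the normal build, and a dedicated Gradle task pushes the schema to the registry.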
In the hands-on exercise that follows, you will build, configure, and register Protobuf and Avro schemas. During the exercise you will:
Examine the settings in a Gradle configuration file
Configure Protobuf and Avro schema definitions
Generate model objects from the schema definitions using Gradle
Register the schemas in Confluent Cloud Schema Registry
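For reference, an Avro schema definition is a JSON document. The following is a minimal, hypothetical example; the record name and fields are illustrative, not taken from the course exercise:

```json
{
  "type": "record",
  "namespace": "io.example.orders",
  "name": "Order",
  "fields": [
    {"name": "order_id", "type": "string"},
    {"name": "amount", "type": "double"},
    {"name": "note", "type": ["null", "string"], "default": null}
  ]
}
```

A code-generation step such as the Gradle build above would turn this definition into an `Order` model class that producer and consumer code can use directly.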
In the schema formats module, you will learn about Protobuf and Avro schema definition formats and how to work with generated objects that are built from each.
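As a small side-by-side illustration, a hypothetical Order record expressed as a Protobuf schema looks like this (message name and fields are illustrative):

```protobuf
syntax = "proto3";

package io.example.orders;

message Order {
  string order_id = 1;
  double amount = 2;
  // In proto3, "optional" makes field presence explicit and trackable
  optional string note = 3;
}
```

Unlike Avro's JSON-based schema files, Protobuf uses its own definition language, and each field carries a stable numeric tag used in the binary encoding.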
In the managing schemas module, you’ll learn that schema management largely revolves around registering schemas in the schema registry, and you will learn about several methods for doing so. You will also learn how schema IDs are automatically assigned when schemas are registered, and how schema version numbers are assigned as schemas evolve and the resulting new schema versions are registered. This module also shows how you can view and retrieve schemas from the schema registry.
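To make the ID and version semantics concrete, here is a small in-memory sketch (a toy model, not the actual Schema Registry implementation): schema IDs are unique across the registry, version numbers increase per subject, and re-registering an identical schema returns the existing ID rather than creating a new version.

```python
class InMemoryRegistry:
    """Toy model of Schema Registry ID/version assignment (illustrative only)."""

    def __init__(self):
        self._next_id = 1
        self._subjects = {}  # subject -> list of (version, schema_id, schema)

    def register(self, subject, schema):
        versions = self._subjects.setdefault(subject, [])
        # Registering an identical schema again returns the existing ID
        for _, schema_id, existing in versions:
            if existing == schema:
                return schema_id
        schema_id = self._next_id
        self._next_id += 1
        versions.append((len(versions) + 1, schema_id, schema))
        return schema_id

    def latest_version(self, subject):
        return self._subjects[subject][-1][0]

registry = InMemoryRegistry()
registry.register("orders-value", "schema-v1")
registry.register("orders-value", "schema-v2")
print(registry.latest_version("orders-value"))  # 2
```

The key behaviors to take away are that the ID identifies a schema globally, while the version identifies a schema's place in one subject's history.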
In this module you will take what you’ve learned so far about schemas and schema registry and put it into action—working with client applications. You will start with the Confluent CLI and the console Kafka producer and consumer clients that ship with Schema Registry. You will then learn how to integrate KafkaProducer and KafkaConsumer clients as well as ksqlDB.
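Under the hood, the Schema Registry-aware serializers these clients use prepend a small header to every message: a magic byte of 0 followed by the schema ID as a 4-byte big-endian integer, then the serialized payload. A minimal Python sketch of this framing (the schema ID and payload bytes here are illustrative):

```python
import struct

MAGIC_BYTE = 0

def frame(schema_id: int, payload: bytes) -> bytes:
    """Prepend the wire-format header: magic byte + 4-byte big-endian schema ID."""
    return struct.pack(">bI", MAGIC_BYTE, schema_id) + payload

def unframe(message: bytes):
    """Split a framed message back into (schema_id, payload)."""
    magic, schema_id = struct.unpack(">bI", message[:5])
    if magic != MAGIC_BYTE:
        raise ValueError("unknown magic byte: %d" % magic)
    return schema_id, message[5:]

framed = frame(100042, b"\x02hi")   # hypothetical schema ID and payload
print(unframe(framed))              # (100042, b'\x02hi')
```

This is why a consumer needs access to the same Schema Registry as the producer: it reads the schema ID from the header and fetches the matching schema to deserialize the payload.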
In the hands-on exercise that follows you will practice what you just learned.
In this module you will learn about the concept of the schema subject, the different strategies for subject naming, and how to apply them. You will also learn how the schema subject name is used for compatibility checks as well as schema versioning.
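As a concrete illustration, the three standard strategies can be sketched as simple functions that compute a subject name (simplified from the actual serializer classes; the topic and record names are hypothetical):

```python
def topic_name_strategy(topic: str, is_key: bool, record_name: str) -> str:
    """Default strategy: one schema per topic, derived from the topic name."""
    return f"{topic}-{'key' if is_key else 'value'}"

def record_name_strategy(topic: str, is_key: bool, record_name: str) -> str:
    """One schema per record type, regardless of which topic carries it."""
    return record_name

def topic_record_name_strategy(topic: str, is_key: bool, record_name: str) -> str:
    """One schema per record type per topic."""
    return f"{topic}-{record_name}"

print(topic_name_strategy("orders", False, "io.example.Order"))         # orders-value
print(record_name_strategy("orders", False, "io.example.Order"))        # io.example.Order
print(topic_record_name_strategy("orders", False, "io.example.Order"))  # orders-io.example.Order
```

Because compatibility is checked per subject, the strategy you choose determines which schemas are compared against each other: per topic, per record type, or per record type within a topic.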
In the final course module you will learn about schema compatibility, the compatibility modes available in Confluent Schema Registry, and how to make use of them. You will also learn how Confluent Schema Registry verifies schema compatibility based upon the compatibility mode that you assign to each schema subject. These checks establish guardrails that let you update schemas safely and keep your client applications operational as the changes roll out.
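To see what such a check involves, here is a deliberately simplified sketch of a BACKWARD compatibility rule for record schemas: consumers using the new schema must still be able to read data written with the old schema, so any field added in the new schema needs a default value. This is a toy model, not the registry's actual Avro resolution logic (real Avro, for example, also allows certain type promotions).

```python
def is_backward_compatible(old_fields: dict, new_fields: dict) -> bool:
    """Toy BACKWARD check: new fields need defaults; field deletion is allowed.

    Each dict maps field name -> {"type": ..., "default": ...} ("default" optional).
    """
    for name, spec in new_fields.items():
        if name not in old_fields and "default" not in spec:
            return False  # new required field: old data has no value for it
        if name in old_fields and old_fields[name]["type"] != spec["type"]:
            return False  # toy rule: no type changes allowed
    return True

v1 = {"order_id": {"type": "string"}}
v2_ok = {"order_id": {"type": "string"},
         "note": {"type": "string", "default": ""}}
v2_bad = {"order_id": {"type": "string"},
          "note": {"type": "string"}}  # no default -> incompatible

print(is_backward_compatible(v1, v2_ok))   # True
print(is_backward_compatible(v1, v2_bad))  # False
```

When a check like this fails at registration time, the registry rejects the new schema version, which is exactly the guardrail behavior described above.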
In the hands-on exercise that follows, you will evolve Protobuf and Avro schemas that you created in prior exercises. You will verify the compatibility of the evolved schemas, identify the cause when a compatibility check fails, make the required correction, and verify that the check then succeeds.
Learn more about Confluent Schema Registry by taking the full course on Confluent Developer!
Here are some additional resources where you can learn more about schemas:
Documentation: Schema Registry Tutorials