Building a scalable, reliable and performant machine learning (ML) infrastructure is not easy. It takes much more effort than just building an analytic model with Python and your favorite machine learning framework.
After all, machine learning with Python means using algorithms that allow computer programs to continuously learn from data, but building the infrastructure around them is several levels higher in complexity. This is important to note because machine learning is clearly gaining steam, even though the term is frequently misused.
Uber, which already runs their scalable and framework-independent machine learning platform Michelangelo for many use cases in production, wrote a good summary:
When Michelangelo started, the most urgent and highest impact use cases were some very high scale problems, which led us to build around Apache Spark (for large-scale data processing and model training) and Java (for low latency, high throughput online serving). This structure worked well for production training and deployment of many models but left a lot to be desired in terms of overhead, flexibility, and ease of use, especially during early prototyping and experimentation [where Notebooks and Python shine].
Uber expanded Michelangelo “to serve any kind of Python model from any source to support other Machine Learning and Deep Learning frameworks like PyTorch and TensorFlow [instead of just using Spark for everything].”
So why did Uber (and many other tech companies) build their own scalable, framework-independent machine learning infrastructure?
The blog posts How to Build and Deploy Scalable Machine Learning in Production with Apache Kafka and Using Apache Kafka to Drive Cutting-Edge Machine Learning describe the benefits of leveraging the Apache Kafka® ecosystem as a central, scalable and mission-critical nervous system. It allows real-time data ingestion, processing, model deployment and monitoring in a reliable and scalable way.
This blog post focuses on how the Kafka ecosystem can help solve the impedance mismatch between data scientists, data engineers and production engineers. By leveraging it to build your own scalable machine learning infrastructure and also make your data scientists happy, you can solve the same problems for which Uber built its own ML platform Michelangelo.
Based on what I’ve seen in the field, an impedance mismatch between data scientists, data engineers and production engineers is the main reason why companies struggle to bring analytic models into production to add business value.
The following diagram illustrates the different required steps and corresponding roles as part of the impedance mismatch in a machine learning lifecycle:
Impedance mismatch between model development and model deployment
Data scientists love Python, period. Therefore, the majority of machine learning/deep learning frameworks focus on Python APIs. Both the most stable and the most cutting-edge APIs, as well as the majority of examples and tutorials, use Python. Beyond Python, there is typically support for other programming languages, including JavaScript for web integration and Java for platform integration, though often with fewer features and less maturity. No matter which other platforms are supported, chances are very high that your data scientists will build and train their analytic models with Python.
There is an impedance mismatch between model development using Python and its tool stack, and a scalable, reliable data platform with the low latency, high throughput, zero data loss and 24/7 availability requirements needed for data ingestion, preprocessing, model deployment and monitoring at scale. In practice, Python is not known for meeting these requirements. However, it is a great client for a data platform like Apache Kafka.
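As a minimal sketch of that last point, here is what producing events to Kafka from Python looks like with Confluent's Python client; the broker address, topic name and event payload are placeholder assumptions:

```python
# Minimal sketch using Confluent's Python client for Apache Kafka
# (pip install confluent-kafka). Broker address and topic are placeholders.
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})

def delivery_report(err, msg):
    # Called once per message to confirm delivery or surface an error.
    if err is not None:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()} [{msg.partition()}]")

event = {"payment_id": 42, "amount": 99.95, "currency": "EUR"}
producer.produce("payments", value=json.dumps(event), callback=delivery_report)
producer.flush()  # Block until all queued messages are delivered
```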
The problem is that writing the machine learning source code to train an analytic model with Python and the machine learning framework of your choice is just a very small part of a real-world machine learning infrastructure. You need to think about the whole model lifecycle. The following image represents this hidden technical debt in machine learning systems (showing how small the “ML code” part is):
Thus, you need to train the model and deploy it to a scalable production environment in order to reliably make use of it. This can either be built natively around the Kafka ecosystem, or you could use Kafka just for ingestion into another storage and processing cluster such as HDFS or AWS S3 with Spark. There are many tradeoffs between Kafka, Spark and several other scalable infrastructures, but that discussion is out of scope for this blog post. For now, we'll focus on Kafka.
Different solutions in the industry solve certain parts of the impedance mismatch between data scientists, data engineers and production engineers. But while all of these solutions help the three roles work better together, the underlying challenges of the hidden technical debt described above remain.
So how can the Kafka ecosystem help here?
In many cases, it is best to provide experts with the tools they like and know well. The challenge is to combine the different toolsets and still build an integrated system, as well as a continuous, scalable machine learning workflow. Therefore, Kafka is not competitive but complementary to the discussed alternatives when it comes to solving the impedance mismatch between the data scientist and the developer.
The data engineer builds a scalable integration pipeline using Kafka as infrastructure and Python for integration and preprocessing statements. The data scientist can build their model with Python or any other preferred tool. The production engineer gets the analytic models (either manually or through any automated, continuous integration setup) from the data scientist and embeds them into their Kafka application to deploy it in production. Or, the team works together and builds everything with Java and a framework like Deeplearning4j.
Any option can pair well with Apache Kafka. Pick the pieces you need, whether it’s Kafka core for data transportation, Kafka Connect for data integration or Kafka Streams/KSQL for data preprocessing. Many components can be used for both model training and model inference. Write once and use in both scenarios as shown in the following diagram:
Leveraging the Apache Kafka ecosystem for a machine learning infrastructure
Monitoring the complete environment in real time and at scale is also a common task for Kafka. A huge benefit is that you build a highly reliable and scalable pipeline only once, but use it for both parts of a machine learning infrastructure. And you can use it in any environment: in the cloud, in on-prem datacenters or at the edge, where the IoT devices are.
Say you wanted to build one integration pipeline from MQTT to Kafka with KSQL for data preprocessing, and use Kafka Connect for data ingestion into HDFS, AWS S3 or Google Cloud Storage, where you do the model training. The same integration pipeline, or at least parts of it, can be reused for model inference. New MQTT input data can directly be used in real time to make predictions.
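As a hedged sketch of the ingestion side of such a pipeline, the MQTT source could be registered through the Kafka Connect REST API from Python. The connector class and its property names follow Confluent's MQTT source connector and are assumptions here, as are all hosts and topic names:

```python
# Hypothetical sketch: register an MQTT source connector through the
# Kafka Connect REST API. The connector class and property names are
# assumptions based on Confluent's MQTT source connector; all hosts
# and topics are placeholders.
import requests

connector_config = {
    "name": "mqtt-source",
    "config": {
        "connector.class": "io.confluent.connect.mqtt.MqttSourceConnector",
        "mqtt.server.uri": "tcp://mqtt-broker:1883",
        "mqtt.topics": "sensors/payments",
        "kafka.topic": "payments_raw",
    },
}

response = requests.post(
    "http://connect:8083/connectors",  # Kafka Connect REST endpoint
    json=connector_config,
)
response.raise_for_status()
print(response.json())
```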
We just explained various alternatives to solving the impedance mismatch between data scientists and software engineers in Kafka environments. Now, let’s discuss one specific option in the next section, which is probably the most convenient for data scientists: leveraging Kafka from a Jupyter Notebook with KSQL statements and combining it with TensorFlow and Keras to train a neural network.
Data scientists use tools like Jupyter Notebooks to analyze, transform, enrich, filter and process data. The preprocessed data is then used to train analytic models with machine learning/deep learning frameworks like TensorFlow.
However, some data scientists are not familiar with the bread-and-butter concepts of software engineering, such as version control systems like GitHub or continuous integration tools like Jenkins.
This raises the question of how to combine the Python experience of data scientists with the benefits of Apache Kafka as a battle-tested, highly scalable data processing and streaming platform.
Kafka offers integration options that can be used with Python, like Confluent's Python client for Apache Kafka or the Confluent REST Proxy for HTTP integration. But these are not really convenient for data scientists who are used to quickly and interactively analyzing and preprocessing data before model training and evaluation, where rapid prototyping is the norm.
KSQL enables data scientists to take a look at Kafka event streams and implement continuous stream processing from their well-known and loved Python environments like Jupyter by writing simple SQL-like statements for interactive analysis and data preprocessing.
The following Python example executes an interactive query from a Kafka stream leveraging the open source framework ksql-python, which adds a Python layer on top of KSQL’s REST interface. Here are a few lines of the Python code using KSQL from a Jupyter Notebook:
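(A minimal sketch; the KSQL server URL and stream name are placeholder assumptions.)

```python
# Minimal sketch with ksql-python (pip install ksql). Server URL and
# stream name are placeholders.
from ksql import KSQLAPI

client = KSQLAPI("http://localhost:8088")

# Interactive query; LIMIT bounds the continuous query like in ANSI SQL.
query = client.query("SELECT * FROM payments_stream LIMIT 10")

for row in query:  # The result is a Python generator object
    print(row)
```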
The result of such a KSQL query is a Python generator object, which you can easily process with other Python libraries. This feels much more Python-native and is analogous to working with NumPy, pandas, scikit-learn and other widespread Python libraries.
Similarly to rapid prototyping with these libraries, you can do interactive queries and data preprocessing with ksql-python. Check out the KSQL quick start and KSQL recipes to understand how to write a KSQL query to easily filter, transform, enrich or aggregate data. While KSQL is running continuous queries, you can also use it for interactive analysis and use the `LIMIT` keyword like in ANSI SQL if you just want to get a specific number of rows.
So what’s the big deal? You understand that KSQL can feel Python native with the ksql-python library, but why use KSQL instead of or in addition to your well-known and favorite Python libraries for analyzing and processing data?
The key difference is that these KSQL queries can also be deployed in production afterwards. KSQL offers you all the features of Kafka under the hood, like high scalability, reliability and failover handling. The same KSQL statement that you use in your Jupyter Notebook for interactive analysis and preprocessing can scale to millions of messages per second in a fault-tolerant manner, with zero data loss and exactly-once semantics. This is very important and valuable for bringing together the Python-loving data scientist and the highly scalable, reliable production infrastructure.
Just to be clear: KSQL + Python is not a cure-all for every data engineering task, and it does not replace the existing Python toolset. But it is a great option in the toolbox of data scientists and data engineers, and it adds new possibilities like getting real-time updates of incoming information as the source data changes, or updating a deployed model with a new and improved version.
Let’s now take a look at a specific and detailed example using the combination of KSQL and Python. It involves advanced code examples using ksql-python and other widespread components from Python’s machine learning ecosystem, like NumPy, pandas, TensorFlow and Keras.
The use case is fraud detection for credit card payments. We use a test dataset from Kaggle as a foundation to train an unsupervised autoencoder to detect anomalies and potential fraud in payments. The focus of this example is not just model training but the whole machine learning infrastructure, including data ingestion, data preprocessing, model training, model deployment and monitoring. All of this needs to be scalable, reliable and performant.
For the full running example and more detailed information, see the documentation.
Let’s take a look at a few snippets of the Jupyter Notebook.
Connection to KSQL server and creation of a KSQL stream using Python:
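A minimal sketch with ksql-python; the server address, topic name and column definitions (mirroring the Kaggle dataset) are illustrative assumptions:

```python
# Sketch: connect to the KSQL server and declare a stream over an
# existing Kafka topic with ksql-python. Column names and types mirror
# the Kaggle credit card dataset and are assumptions here.
from ksql import KSQLAPI

client = KSQLAPI("http://localhost:8088")

client.create_stream(
    table_name="creditcardfraud_source",
    columns_type=["Id bigint", "Timestamp varchar", "User varchar",
                  "Time int", "V1 double", "V2 double", "Amount double",
                  "Class string"],
    topic="creditcardfraud_source",
    value_format="DELIMITED",
)
```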
Preprocessing incoming payment information using Python:
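For example, a hedged sketch of selecting the relevant columns, filtering incomplete records and converting the stream to Avro (stream and topic names are assumptions, reusing the `client` from the previous snippet):

```python
# Sketch: derive a preprocessed stream with a KSQL statement executed
# from Python. Column selection and filter predicate are illustrative.
client.ksql("""
    CREATE STREAM creditcardfraud_preprocessed_avro
    WITH (VALUE_FORMAT='AVRO', KAFKA_TOPIC='creditcardfraud_preprocessed_avro') AS
    SELECT Time, V1, V2, Amount, Class
    FROM creditcardfraud_source
    WHERE Class IS NOT NULL;
""")
```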
Some more examples for possible data wrangling and preprocessing with KSQL:
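A few hedged illustrations; all stream, table, column and function choices are placeholders:

```python
# Illustrative KSQL statements for further wrangling, executed via
# ksql-python; all names are placeholders.

# Anonymize personally identifiable information:
client.ksql("""
    CREATE STREAM payments_anonymized AS
    SELECT MASK_RIGHT(User, 4) AS User, Amount
    FROM creditcardfraud_source;
""")

# Filter out records with missing values:
client.ksql("""
    CREATE STREAM payments_complete AS
    SELECT * FROM creditcardfraud_source
    WHERE Amount IS NOT NULL;
""")

# Aggregate payments per user over a 1-hour tumbling window:
client.ksql("""
    CREATE TABLE payments_per_user AS
    SELECT User, COUNT(*) AS payment_count
    FROM creditcardfraud_source
    WINDOW TUMBLING (SIZE 1 HOUR)
    GROUP BY User;
""")
```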
The Jupyter Notebook contains the full example. We use Python + KSQL for integration, data preprocessing and interactive analysis, and combine them with various other libraries from a common Python machine learning tool stack for prototyping and model training, such as NumPy, pandas, TensorFlow and Keras.
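As a hedged sketch of the training step, assuming the preprocessed records have been collected into a pandas DataFrame (here loaded from the Kaggle CSV as a stand-in), with layer sizes and epochs chosen purely for illustration:

```python
# Hedged sketch: train an unsupervised autoencoder with Keras on the
# preprocessed payment data. Layer sizes, epochs and the CSV stand-in
# for the KSQL-preprocessed data are illustrative assumptions.
import pandas as pd
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model

# Stand-in for data collected via ksql-python: the Kaggle CSV.
df = pd.read_csv("creditcard.csv")
X = df.drop(columns=["Class"]).to_numpy().astype("float32")

input_dim = X.shape[1]
inputs = Input(shape=(input_dim,))
encoded = Dense(14, activation="relu")(inputs)
encoded = Dense(7, activation="relu")(encoded)
decoded = Dense(14, activation="relu")(encoded)
outputs = Dense(input_dim, activation="linear")(decoded)

autoencoder = Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

# The autoencoder learns to reconstruct "normal" payments; a high
# reconstruction error later flags anomalies and potential fraud.
autoencoder.fit(X, X, epochs=10, batch_size=32, validation_split=0.1)
```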
Model inference and visualization are done in the Jupyter Notebook, too. After you have built an accurate model, you can deploy it anywhere to make predictions and leverage the same integration pipeline that was used for model training. Some examples of model deployment in Kafka environments include embedding the model directly into a Kafka client or Kafka Streams application, calling it from a KSQL user-defined function (UDF), or performing RPC calls from the streaming application to a model server like TensorFlow Serving.
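For the first of those options, here is a hedged sketch of streaming inference with the trained autoencoder from the sketch above, embedded in a Python Kafka consumer; topic names, the JSON message format and the error threshold are assumptions:

```python
# Hedged sketch: streaming inference with the trained autoencoder from
# the training sketch above, embedded in a Python Kafka consumer. Topic
# names, JSON message format and the error threshold are assumptions.
import json
import numpy as np
from confluent_kafka import Consumer, Producer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "fraud-detector",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["payments_preprocessed"])  # placeholder topic
producer = Producer({"bootstrap.servers": "localhost:9092"})

THRESHOLD = 2.5  # reconstruction-error cutoff, tuned on validation data

while True:
    msg = consumer.poll(timeout=1.0)
    if msg is None or msg.error():
        continue
    payment = json.loads(msg.value())
    features = np.array([payment["features"]], dtype="float32")
    reconstruction = autoencoder.predict(features, verbose=0)
    error = float(np.mean(np.square(features - reconstruction)))
    if error > THRESHOLD:  # anomalous payment -> potential fraud
        producer.produce("fraud-alerts", value=msg.value())
        producer.poll(0)  # serve delivery callbacks
```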
As you can see, both in theory (Google’s paper Hidden Technical Debt in Machine Learning Systems) and in practice (Uber’s machine learning platform Michelangelo), it is not a simple task to build a scalable, reliable and performant machine learning infrastructure.
The impedance mismatch between data scientists, data engineers and production engineers must be resolved in order for machine learning projects to deliver real business value. This requires using the right tool for the job and understanding how to combine them. You can use Python and Jupyter for prototyping and demos (often Kafka and KSQL might be overhead here and not needed if you just want to do fast, simple prototyping on a historical dataset), or combine Python and Jupyter with your whole development lifecycle up to production deployments at scale.
Integration of Kafka event streams and KSQL statements into Jupyter Notebooks allows data scientists to keep working in their familiar Python environment while the very same statements and pipelines scale to reliable, mission-critical production deployments.
Python for prototyping and Apache Kafka for a scalable streaming platform are not rival technology stacks. They work together very well, especially if you use “helper tools” like Jupyter Notebooks and KSQL.
They work well with all categories of machine learning. In supervised machine learning, the training data is labeled by a data scientist who supervises the process. Unsupervised machine learning doesn't use labels, instead finding clusters and structure in the data on its own. And reinforcement learning programs improve based on positive and negative feedback.
Please try it out and let us know your thoughts. How do you leverage the Apache Kafka ecosystem in your machine learning projects?