When we first introduced Streaming Agents, we were solving a fundamental challenge: every AI problem is a data problem. When data is missing, stale, or inaccessible, even the most advanced agents and LLMs fail to deliver. How do we build scalable agents that aren't just powerful in isolation, but part of multi-agent systems that are event-driven, replayable, and grounded in accurate data? Developers told us they wanted more than just tools: they needed to access data, move from prototype to production, and clearly understand what was happening inside their agents in order to debug, evaluate, and iterate. Streaming Agents make that possible, and today we're taking it a step further with new features designed to help teams build faster, gain deeper observability, and improve AI decision-making with real-time context.
Today also marks the launch of Confluent Intelligence, a fully managed service that brings together all of the AI offerings on Confluent's data streaming platform to deliver real-time, context-rich, and trustworthy AI systems using Apache Kafka® and Apache Flink®. It advances our vision of making enterprise-grade streaming AI seamlessly integrated into every moment of the business.
As part of Confluent Intelligence, Streaming Agents unify data processing and agentic AI workflows by empowering teams to build, test, deploy, and orchestrate event-driven agents directly on Flink. This gives agents the unique ability to access fresh context in streaming pipelines to effectively monitor, reason, and act on events for intelligent automation.
With Streaming Agents, every engineer can use familiar Flink APIs to build secure and trustworthy agents, with native support for Model Inference, Tool Calling with Model Context Protocol (MCP), Embeddings for retrieval-augmented generation (RAG), Built-in ML Functions, External Tables and Search, and Connections. We’re continuing to expand on these capabilities and deliver more streamlined developer experiences.
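As a sketch of what the model-inference piece looks like in Flink SQL on Confluent Cloud (the model name, connection name, and table names here are illustrative; exact `WITH` options vary by model provider):

```sql
-- Illustrative sketch: register a remote LLM as a Flink model.
-- Connection and model names are hypothetical examples.
CREATE MODEL support_triage
  INPUT (ticket_text STRING)
  OUTPUT (category STRING)
  WITH (
    'provider' = 'openai',
    'task' = 'text_generation',
    'openai.connection' = 'my-openai-connection'
  );

-- Run continuous inference over a stream of support tickets.
SELECT t.ticket_id, p.category
FROM tickets AS t,
     LATERAL TABLE(ML_PREDICT('support_triage', t.ticket_text)) AS p;
```

Because the query is a continuous Flink job, every new ticket event is scored as it arrives rather than in a periodic batch.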
What’s new in the Q4’25 release:
Agent definition – Quickly build agents in just a few lines of code and unlock more sophisticated tasks with better outcomes by iteratively evaluating and adapting tool calling.
Observability and debugging – Gain visibility into all agent actions, easily diagnose issues to accelerate resolution, and reliably recover from failure.
Real-Time Context Engine – Using MCP, serve fresh context to Streaming Agents to improve agent decision-making and the quality of outputs.
Streaming Agents thrive in an interoperable ecosystem and integrate with leading technologies across the AI stack, making it easier to build and scale. With Amazon Web Services (AWS), Google Cloud, and Microsoft Azure, you can build agents on your preferred cloud, call LLMs, and leverage AI services such as Amazon Bedrock, Google Cloud's Vertex AI platform, and Azure AI services. Ingest high-value data from sources including SAP and Salesforce, and process it in real time to create the most complete, up-to-date, and accurate view of operational events. With vector databases such as MongoDB, Elastic, Pinecone, Azure Cosmos DB, and more, along with Vectara, you can continuously hydrate your context stores with real-time embeddings, enable RAG, run Flink Vector Search, and synchronize data with sink connectors.
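The continuous-hydration pattern described above can be sketched in Flink SQL as an embedding model applied to a document stream, with a sink connector keeping the vector store in sync (model, connection, and table names are illustrative assumptions, not product defaults):

```sql
-- Illustrative sketch: compute embeddings continuously so a vector
-- store stays fresh. Names and options are hypothetical examples.
CREATE MODEL doc_embedder
  INPUT (chunk STRING)
  OUTPUT (embedding ARRAY<FLOAT>)
  WITH (
    'provider' = 'openai',
    'task' = 'embedding',
    'openai.connection' = 'my-openai-connection'
  );

-- Materialize a continuously updated embeddings table; a sink
-- connector can then push it to MongoDB, Pinecone, Elastic, etc.
CREATE TABLE doc_embeddings AS
SELECT d.doc_id, d.chunk, e.embedding
FROM docs AS d,
     LATERAL TABLE(ML_PREDICT('doc_embedder', d.chunk)) AS e;
```

The point of the pattern is that RAG context is never a stale snapshot: each new or updated document re-embeds as an event.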
Continue to leverage your frameworks and model providers of choice such as Anthropic and LlamaIndex – Streaming Agents can communicate with your existing tools and agents. With Real-Time Context Engine built on top of MCP, you can bring fresh context to Streaming Agents wherever you need it, at the moment a decision needs to be made. Implement Streaming Agents faster with system integrators such as BearingPoint, GoodLabs, Improving, Infosys, Intelium, msg systems, Ness Digital Engineering, Onibex, Psyncopate, and World Wide Technology so that you can accelerate demos into enterprise-grade multi-agent systems with the right AI expertise.
Together, these integrations ensure that Streaming Agents fit seamlessly into your existing AI stack and enhance your AI tools with enriched data flows and observability—so you can develop faster, stay grounded in context, and get to production with confidence.
Streaming Agents are designed to handle high-volume, real-time data and continuously evolving context, making them ideal for enterprise use cases for which fresh information, accuracy, and observability are critical. By continuously monitoring data streams and using context from diverse sources, Streaming Agents can make intelligent, informed decisions and automate actions that drive better outcomes.
High-value use cases include:
Real-time fraud prevention – Continuously ingest and process transaction data, detect anomalies, and automatically block suspicious activity.
Dynamic customer support – Pull live context from customer relationship management (CRM) systems, chat interactions, and knowledge bases to deliver in-the-moment personalized and accurate responses.
Intelligent supply chain optimization – Track inventory, shipments, and demand signals in real time, automatically reordering stock, rerouting shipments, or adjusting production schedules based on live conditions.
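For instance, the fraud-prevention use case above reduces to a continuous scoring query: each transaction event is passed through a model and suspicious results are routed to a blocking workflow. A minimal sketch, assuming a previously registered `fraud_model` and hypothetical table names:

```sql
-- Illustrative sketch of real-time fraud detection: score every
-- transaction as it arrives and emit high-risk ones downstream.
-- Model, tables, and the 0.9 threshold are hypothetical examples.
CREATE TABLE flagged_transactions AS
SELECT t.txn_id, t.account_id, t.amount, s.fraud_score
FROM transactions AS t,
     LATERAL TABLE(ML_PREDICT('fraud_model', t.account_id, t.amount)) AS s
WHERE s.fraud_score > 0.9;
```

A downstream agent or sink connector can consume `flagged_transactions` to block activity or open a case automatically.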
Here’s a closer look at new capabilities in the Q4’25 release:
Agent definition – Build Streaming Agents in minutes, with simplified syntax and powerful abstractions for real-time, always-on workflows integrated with stream processing. You can define agents and tools in just a few lines of code, minimizing boilerplate work to focus on building differentiated workflows. The Agent Definition feature also enables complex tasks to be performed and optimized by allowing the LLM to interact with tools repeatedly—evaluating outputs and deciding if further action is required. This makes agents more adaptive and helps improve outcomes through iterative execution, with a configurable number of iterations. Technical features include support for create, read, update, and delete (CRUD) operations on tool and agent resources in the Flink catalog, resulting in faster development, easier testing and reuse, and smarter, multistep tool invocation.
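To make the "few lines of code" claim concrete, here is a purely hypothetical sketch of defining a tool and an agent as catalog resources; this is not exact product DDL, only an illustration of the shape described above (an MCP-backed tool, an agent bound to a model, and a bounded iteration count):

```sql
-- Hypothetical sketch only: exact DDL is product-specific.
-- A tool exposed over MCP, registered in the Flink catalog.
CREATE TOOL order_lookup WITH (
  'type' = 'mcp',
  'mcp.connection' = 'orders-mcp-server'
);

-- An agent that can call the tool iteratively, with a cap on
-- how many evaluate-and-retry loops it may run per event.
CREATE AGENT support_agent WITH (
  'model' = 'support_triage',
  'tools' = 'order_lookup',
  'max_iterations' = '5'
);
```

Because tools and agents are catalog resources, the same CRUD operations used for tables apply: they can be listed, altered, dropped, and reused across queries.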
Observability and debugging – Every agent interaction is logged, allowing development teams to trace the full execution path. Structured immutable logs deliver end-to-end traceability of each agent action, including input events, tool inputs/outputs, latencies, LLM decisions, and agent-to-agent communications. This ensures security and compliance by providing a tamper-proof record. You can debug tool call details (e.g., name, parameters, return values), retain and share agent context, and count on Flink-powered recovery to restore agents from the latest checkpoint if a crash occurs. All this deep visibility runs on production-grade, event-driven infrastructure, enabling rapid iteration, reliable testing, and resilient failure recovery without introducing risk or requiring ad hoc experiments.
Real-Time Context Engine – Exposing trustworthy, real-time context to Streaming Agents isn't just about the data—it's about doing it securely, reliably, and at scale. Open protocols like MCP promise a clean abstraction, but in practice, teams are left managing their own infrastructure to make it work. That means standing up and securing custom servers, handling authentication, enforcing role-based access control (RBAC), building telemetry pipelines, and wiring everything to streaming data underneath—just to serve a single lookup. This approach is brittle and hard to govern, and it slows down the rollout of intelligent agents across the enterprise.
Real-Time Context Engine delivers live structured data to any AI agent and application—with built-in authentication, observability, and access controls. There’s no need to run your own MCP infrastructure, wire in Kafka consumers, or manage security pipelines. Instead, Real-Time Context Engine abstracts all of that behind simple, secure APIs and standard agent protocols like MCP. With authentication, RBAC, and audit logging provided out of the box, the engine ensures enterprise-grade governance. Access to live indexed data for real-time lookups means Streaming Agents as well as any other agent can have the freshest available data and make intelligent, informed decisions that deliver greater automation value.
Ready to build your first agent in minutes?
By bringing data processing and AI workflows together, Streaming Agents make it easier than ever to build intelligent agents that are event-driven, observable, and context-aware. Get more out of your AI stack by seamlessly integrating Streaming Agents with any data system, model, and tool using familiar Flink APIs on top of a secure, governed data streaming platform.
Apache®, Apache Kafka®, Apache Flink®, Flink®, and the Flink logo are trademarks of the Apache Software Foundation in the United States and/or other countries. No endorsement by the Apache Software Foundation is implied by using these marks. All other trademarks are the property of their respective owners.