
Unleash Real-Time Agentic AI: Introducing Streaming Agents on Confluent Cloud

As AI models become commoditized, the conversation is shifting from building smarter models to building data infrastructure that turns models into real business value. Enterprises are accelerating their adoption of agentic AI—systems that don’t just predict but plan, decide, and act autonomously—across their software and operations. To realize the full potential of AI-driven automation, they need agents that can serve as the eyes and ears of the business, monitoring and reacting to events as they happen. But what’s holding teams back isn’t the lack of great models—it’s the lack of real-time, trustworthy data and a scalable, resilient way to deploy agents in production.

If you’ve spent time building agents, you’ve undoubtedly felt the pain: cobbling together disparate systems and fighting agent abstractions that get in the way of debugging, testing, and evaluation, all while trying to gather the context your agents need to reason effectively. Early experiments are easy, but productionizing agents hits a wall: data processing is disconnected from the agent, so teams end up running multiple systems, and even new databases, just to move and share data. Multi-agent systems quickly balloon into complex, brittle monoliths. Just as event-driven microservices revolutionized software architecture, it’s time for a similar transformation in agentic AI.

Today, we’re announcing Streaming Agents on Confluent Cloud, which enables you to build, deploy, and orchestrate event-driven agents natively on Apache Flink®. Embedded in data streams, Streaming Agents can monitor and act on what’s happening in the business in real time to power intelligent context-aware automation.

Unifying data processing and AI for intelligent context-aware automation

With Streaming Agents, you can:

  • Unify stream processing and agentic AI workflows using familiar Flink APIs, simplifying development and enabling every engineer to be an AI engineer

  • Seamlessly integrate with any tool, model, and data system

  • Access real-time context to allow agents to operate dynamically on live operational events and effectively use large language models (LLMs) as reasoning engines to plan, decide, and act

  • Ensure agents are secure and trustworthy with full visibility, control, and secure, governed event flows

Ready to turn agents from demos into production-ready, event-driven multi-agent systems? Here’s how Streaming Agents make that possible.

Event-Driven by Design, Always On and Replayable

At their core, Streaming Agents are event-driven microservices with a brain. Built on a unified platform with fully managed Flink and Apache Kafka® powered by the Kora engine, they’re decoupled, produce and consume events asynchronously, and are designed to scale reliably into multi-agent systems. Streaming Agents operate on the business in motion, where everything starts with a stream, not a static snapshot. This means they don’t wait for a batch job or a human trigger to act. They’re always on, continuously ingesting, reasoning, and reacting to data the moment it arrives.

But it’s not just about real-time execution: every input an agent sees is part of an immutable event log, so actions are not only reactive but also replayable. You can rewind the stream to recover from a failure, test new logic, or audit agent decisions after the fact. This gives Streaming Agents memory, a foundation for traceability, safety, and evolution over time.

Why Streaming Agents Are Uniquely Suited for Enterprise Workflows

Streaming Agents have core capabilities that make them purpose-built to power real-time automation at enterprises. Here’s how you can use Streaming Agents to continuously perceive, reason, and act as events unfold:

  • Unify stream processing and AI

    Traditional enterprise architectures and agentic AI projects are slowed by batch-based pipelines, siloed data, and brittle integrations stitched together for data processing, model inference, and orchestration. Streaming Agents break through these roadblocks by letting you create event-driven agents directly in Flink on top of Kafka, combining stream processing, AI reasoning, and agent orchestration in a single, unified platform. This means every engineer can be an AI engineer, using familiar Flink APIs to develop and deploy powerful context-aware agents within existing stream processing workflows.

  • Access real-time context for reliable decisioning

    Agents are only as useful as the data they can access. Because Streaming Agents live inside the event streams at the core of your business, they process events as they happen rather than operating on static snapshots. This fresh context gives Streaming Agents the most accurate, up-to-date understanding of the business, improving decision quality whether you’re investigating anomalies, personalizing experiences in real time, responding to customer activity, or adapting to operational changes.

  • Seamlessly integrate with any model, tool, and data system

    Production-ready agentic AI systems require integrations across many internal and external tools and systems. Streaming Agents provide native support for connecting to LLMs and embedding models, tool invocation with Model Context Protocol (MCP), contextual search, and data enrichment from external non-Kafka systems. These built-in capabilities eliminate the need for teams to code one-off integrations, and their extensibility lets Streaming Agents collaborate with other agents and systems and take orchestrated actions.

  • Ensure agents are secure and trustworthy to iterate faster, safely

    Real-world agentic AI applications demand more than powerful automation; they require security and governance. Streaming Agents are designed from the ground up to be production-ready: secure, scalable, and equipped with built-in replayability. Teams can debug, iterate, and test new agent logic against actual event data without risking live impact, enabling fast, safe rollout of new functionality. Stream Governance on Confluent Cloud, the industry’s only data streaming governance suite, brings lineage tracking, schema enforcement, role-based access control (RBAC), monitoring, and audit logging, ensuring that Streaming Agents remain trustworthy and compliant no matter how data flows.

Under the Hood: What Makes Streaming Agents Different

Streaming Agents empower you to use familiar Flink APIs to bring together stream processing and agentic AI. Instead of juggling disparate frameworks and building fragile pipelines, you can now build, test, deploy, and orchestrate agents directly on Flink. This unified approach ensures that every developer—not just AI/ML experts—can contribute to delivering context-aware, intelligent automation while using tools they already know.

Running as Flink jobs alongside stream processing gives Streaming Agents unmatched context and responsiveness—as well as event-driven replayability, observability, and governance not found in other agent frameworks.

This enables high-value use cases such as:

  • Real-time product personalization: Enrich real-time customer activity with data from loyalty programs, purchases, and more, and feed this context to the LLM to identify preferences and intent, instantly sending offers that boost in-the-moment engagement and sales.


  • Anomaly investigation: Monitor and perform real-time anomaly detection using the built-in ML function on high-velocity streams (e.g., system metrics, network traffic, sensor data). Contextualize events by joining with metadata from incident records and threat feeds to cluster related anomalies and reduce noise. Immediately provide this to the LLM to identify root causes and route to relevant teams to reduce mean time to resolution (MTTR).

Here’s a closer look at the key features of Streaming Agents:

Model Inference

Streaming Agents integrate AI models seamlessly into your streaming data pipelines. Work directly with models hosted by providers such as OpenAI, Azure Machine Learning, and Amazon SageMaker within Flink SQL queries, with native support for remote model endpoints. This approach enables real-time reasoning, retrieval-augmented generation (RAG), and dynamic decision-making based on the freshest available data. Models are managed as first-class resources in Flink SQL, simplifying agentic workflows.
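As a rough illustration, treating a model as a first-class Flink SQL resource might look like the sketch below. The model name, connection name, and option keys are assumptions for illustration; consult Confluent’s Flink SQL reference for the exact `CREATE MODEL` and `ML_PREDICT` options supported in your environment.

```sql
-- Register a remote LLM endpoint as a first-class Flink SQL resource.
-- All names and option values below are illustrative.
CREATE MODEL ticket_triage
INPUT (ticket_text STRING)
OUTPUT (category STRING)
WITH (
  'provider' = 'openai',
  'task' = 'text_generation',
  'openai.connection' = 'my_openai_connection',
  'openai.system_prompt' = 'Classify the ticket as billing, outage, or other.'
);

-- Invoke the model on every event as it arrives in the stream.
SELECT t.ticket_id, p.category
FROM support_tickets AS t,
     LATERAL TABLE(ML_PREDICT('ticket_triage', t.ticket_text)) AS p;
```

Because the query runs continuously, every new ticket event is classified the moment it lands on the stream, with no batch scoring job in between.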

Real-Time Embeddings

Streaming Agents leverage real-time embeddings to turn unstructured enterprise data into vector embeddings, continuously supplying up-to-date context for RAG and semantic search and mitigating LLM hallucinations. You can use any embedding model (e.g., OpenAI, Amazon, Google Gemini) with any vector database (e.g., MongoDB Atlas, Pinecone, Elastic, Couchbase) across any cloud. Save time with the Create Embeddings Action, a no-code shortcut that lets you vectorize data in just a few clicks, ensuring that data is always fresh for agent tasks.
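A continuous embedding pipeline of this kind could be sketched in Flink SQL along these lines. The table names, the embedding model, and its options are assumptions, and the sink table would be backed by a vector database connector.

```sql
-- Hypothetical embedding model; provider and option names are illustrative.
CREATE MODEL doc_embedder
INPUT (chunk STRING)
OUTPUT (embedding ARRAY<FLOAT>)
WITH (
  'provider' = 'openai',
  'task' = 'embedding',
  'openai.connection' = 'my_openai_connection'
);

-- Continuously vectorize incoming document chunks and write them to a
-- table backed by a vector store, keeping RAG context fresh.
INSERT INTO document_embeddings
SELECT d.doc_id, d.chunk, e.embedding
FROM documents AS d,
     LATERAL TABLE(ML_PREDICT('doc_embedder', d.chunk)) AS e;
```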

Built-In ML Functions

Streaming Agents come with out-of-the-box ML functions for forecasting and anomaly detection on time-series streaming data, with default configuration (Auto-ARIMA) or custom user configuration (e.g., training size, seasonality, forecast horizon). With simple functions such as ML_FORECAST and ML_DETECT_ANOMALIES, you can express complex data science tasks in Flink SQL and derive real-time insights without dedicated ML expertise, model building, or separate tooling. Real-time visualizations (charts and graphs for forecast and anomaly events) are available for continuous monitoring and auditing.
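An invocation of these functions might look roughly like the following. Treat this as a shape rather than a verbatim query: the column names are invented, and the exact function signatures and configuration keys should be checked against the Confluent Flink SQL function reference.

```sql
-- Hypothetical shape: score each reading against a model trained over a
-- sliding window of recent history. Table and column names are illustrative.
SELECT
  sensor_id,
  ts,
  ML_DETECT_ANOMALIES(reading, ts) OVER w AS is_anomaly,
  ML_FORECAST(reading, ts)         OVER w AS forecast
FROM sensor_metrics
WINDOW w AS (
  PARTITION BY sensor_id
  ORDER BY ts
  ROWS BETWEEN 100 PRECEDING AND CURRENT ROW
);
```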

Tool Calling With MCP

Tool calling with Anthropic’s open protocol MCP enables Streaming Agents to contextually invoke tools (defined in an MCP server or as UDFs) based on real-time business events. This brings agent tool calling into streaming pipelines—agents can decide, trigger, and use the right tools at the right moment, with each tool interaction logged for traceability and auditability.
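In outline, wiring a Streaming Agent to an MCP server might be sketched as below. The connection type, the function name `AI_TOOL_INVOKE`, and its arguments are assumptions for illustration; the real surface is defined in Confluent’s Streaming Agents documentation.

```sql
-- Hypothetical: register an MCP server as a connection, then let the
-- model choose and invoke tools per event. All names are illustrative.
CREATE CONNECTION ops_tools
WITH (
  'type' = 'mcp_server',
  'endpoint' = 'https://tools.internal.example.com/mcp'
);

-- For each alert event, the model decides which tool (if any) to call,
-- and every invocation is logged for traceability.
SELECT
  alert_id,
  AI_TOOL_INVOKE('triage_model', alert_summary) AS tool_result
FROM critical_alerts;
```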

Connections

Connections in Streaming Agents provide a secure, reusable way to integrate and manage connectivity with external systems, including relational databases, vector databases, REST APIs, AI model endpoints, and MCP servers. Connections ensure that sensitive credentials (secrets) are managed securely and never exposed in catalog metadata, logs, or configuration files. They provide reusability, allowing the same connection to be shared across multiple tables, models, and functions, and they centralize connection management for large-scale production deployments—critical for maintaining enterprise security and compliance at scale.
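A connection is created once with its secret and then referenced by name. In sketch form (the type, endpoint, and key values are placeholders):

```sql
-- Define the credentialed endpoint once; the api-key is stored as a
-- secret and never surfaced in catalog metadata or logs.
CREATE CONNECTION my_openai_connection
WITH (
  'type' = 'openai',
  'endpoint' = 'https://api.openai.com/v1/chat/completions',
  'api-key' = '<secret>'
);
```

Any model or external table that needs this endpoint can then reference `my_openai_connection` by name instead of carrying its own credentials.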

External Tables and Vector Search

Streaming Agents come with enhanced data enrichment for more accurate AI decision-making. You can easily join data streams with non-Kafka sources (e.g., relational databases, vector databases, REST APIs) using Flink SQL. Built-in vector search enables Streaming Agents to perform both vector search for RAG and instant external table lookups, eliminating complex data synchronization. This native capability means agents have access to the most current, complete, and accurate view of enterprise data for real-time reasoning, all while maintaining the reliability and observability benefits of Flink.
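An enrichment join against a non-Kafka source can use standard Flink SQL lookup-join syntax. In the sketch below, `customers_external` stands in for an external table backed by a relational database; the table, columns, and time attribute are assumptions.

```sql
-- Enrich each order event with the customer's current profile from an
-- externally backed table, looked up at processing time.
SELECT
  o.order_id,
  o.amount,
  c.loyalty_tier
FROM orders AS o
JOIN customers_external FOR SYSTEM_TIME AS OF o.proc_time AS c
  ON o.customer_id = c.customer_id;
```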

Get Started With Streaming Agents

Ready to create your first agent?

Streaming Agents help you build, deploy, and orchestrate event-driven agents that continuously act on live operational events—no more fragmented workflows, brittle integrations, or stale inputs.

It’s never been easier to deliver intelligent real-time agents that automate for the business in motion. Start your journey with Streaming Agents and turn agents from demos into production-ready, event-driven multi-agent systems.

Get started today:


Apache®, Apache Kafka®, Apache Flink®, Flink®, and the Flink logo are trademarks of the Apache Software Foundation in the United States and/or other countries. No endorsement by the Apache Software Foundation is implied by using these marks. All other trademarks are the property of their respective owners.

  • This blog was a collaborative effort between multiple Confluent employees.

  • Mayank is a Product Manager for Stream Processing at Confluent. He has extensive experience building and launching enterprise software products, with stints at VMware, Amazon, and the growth-stage startups Livspace and Bidgely.

    Mayank holds an MBA with a specialization in Artificial Intelligence from Northwestern University, and a Computer Science degree from BITS Pilani, India.
