Real-Time Inventory in Retail with Confluent Cloud

The landscape for how a business interacts with its customers has changed dramatically over the last several years. No longer is day-old data acceptable to customers or the business. One of the sectors most impacted by this seismic change has been retail. Long gone are the days when the local store for a retail chain was the touch point for all customer experiences. A survey by Salesforce found that 69% of customers believe companies should offer new ways to get existing products and services with an eye toward convenience. In other words, the customer expects to be able to decide what to buy as well as how, and wants to be fully informed as they do so.

Challenges with maintaining a consistent, up-to-the-minute view of inventory

To provide these new, expanded customer experiences, data is required—namely, enriched and up-to-the-minute data from many different sources. These sources are often inextricably linked from a business perspective. Data for the customer, product, sales, location, merchandising, distribution, supply chain, and a host of others have an impact on every decision that is made. Included in the list is accurate inventory. Without the ability to know what product is where at any given point in time, sales and revenue are lost, the wrong data is used in merchandising decisions, and customer experiences suffer. Transactions that affect inventory include returns, purchases, exchanges, shrink, receipts, shipments, cycle counts, interstore transfers, and in-transit processing. 

The lack of a consistent, current view of inventory has plagued retail organizations for years, and newer structures such as multiple purchase channels have only exacerbated the problem. Customers experience great frustration when they search for a product online, see it displayed as available at a particular store location, and arrive shortly thereafter to find that it is not. This is normally due to stale data in the systems responsible for showing current inventory.

Another example of inventory having an impact in retail is its critical usage in allocation decisions. When inventory is received, it is normally put away in a warehouse for subsequent allocation to a store or distribution center. How much more effectively could inventory be managed if upon receipt it could immediately be allocated for distribution to a location based on current inventory and sales data? This will become even more important as channels are crossed and online purchases in which the customer wishes to pick up the product in store become the norm.

Streaming solution with Confluent Cloud

Retailers have historically attempted to deal with each of these challenges by throwing more hardware at the problem, allowing them to shorten the time between batch feeds of sales, receipts, shipments, and other data impacting inventory. The core problem remains the same, however; old data is being used for critically important current decisions. Cumbersome and tightly coupled job schedules result in technical debt that is difficult to refactor, creating gaps in data for buyers, allocators, and most importantly, customers.

To deal with this problem more effectively, retailers can use Confluent’s data streaming platform to consume data in near real time from the many sources that affect inventory. They can then process that data into a real-time stream of all transactions, regardless of source, enriched to provide quantities available by SKU at each store location. This enriched stream is consumable by downstream services such as available-to-commerce inventory, replenishment systems, allocation systems, and merchandising applications. The architecture includes the use of over 120 pre-built connectors available in the Confluent connector hub, as well as stream processing tools such as Kafka Streams, ksqlDB, and Confluent’s Flink offering.
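Connectors can even be declared directly from ksqlDB. The sketch below assumes a self-managed Debezium MySQL source capturing changes from an in-store point-of-sale database; the connector class and property names follow Debezium conventions, but the hostname, credentials, and table names are illustrative placeholders, not part of the reference architecture.

```sql
-- Hypothetical CDC source connector for the in-store sales database.
-- All connection settings below are placeholders.
CREATE SOURCE CONNECTOR sales_source WITH (
  'connector.class'      = 'io.debezium.connector.mysql.MySqlConnector',
  'database.hostname'    = 'store-db.internal',
  'database.port'        = '3306',
  'database.user'        = 'cdc_user',
  'database.password'    = '<secret>',
  'database.server.name' = 'store',
  'table.include.list'   = 'pos.sales'
);
```

Each captured change lands on a Kafka topic, which the stream definitions shown later in this post then pick up.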

An example reference architecture for this use case is shown below:

Confluent database source connectors extract changes from both the in-store and e-commerce application databases and publish each to Kafka topics. Connectors are no-code solutions that allow an organization to begin moving to a data-in-motion paradigm without changing an existing source application on day one. Over time, the strangler pattern can be used to migrate legacy source systems to a more modern, easily refactored architecture. Once the changes are published to Apache Kafka®, we can represent each as an abstraction using ksqlDB. Below is an example of creating streams for sales and receipts.

-- Sales events arrive as Avro; the schema is inferred from Schema Registry.
create stream sales_s 
  with (kafka_topic = 'sales', 
        value_format = 'AVRO', 
        timestamp = 'ts');

-- Receipts arrive as JSON, so the schema must be declared explicitly.
create stream receipts_s(txnid bigint,
                         ts bigint,
                         sku varchar,
                         quantity int) 
  with (kafka_topic = 'receipts_json', 
        value_format = 'JSON', 
        timestamp = 'ts');

-- Re-serialize the JSON receipts as Avro for consistency downstream.
create stream receipts_avro_s 
  with (kafka_topic = 'receipts', 
        value_format = 'AVRO') as 
  select * from receipts_s;

Subsequently, we create an inventory stream that effectively “unions” the two together, with sales reflected as negative quantities and receipts as positive quantities.

-- Unified stream of inventory movements; 'type' tags the source
-- ('R' = receipt, 'S' = sale).
create stream inventory_s(ts bigint, 
                          sku varchar,
                          quantity int, 
                          type varchar) 
  with (kafka_topic = 'inventory', 
        value_format = 'AVRO', 
        timestamp = 'ts');

-- Receipts add to inventory.
insert into inventory_s 
  select ts, 
         sku, 
         quantity, 
         'R' as type 
    from receipts_avro_s;

-- Sales subtract from inventory.
insert into inventory_s 
  select ts, 
         sku, 
         quantity*-1 as quantity, 
         'S' as type 
    from sales_s;

Finally, we construct a ksqlDB table that not only reflects the current state of inventory in real time, but can also be queried to obtain the current inventory for a given SKU.

-- Continuously aggregate movements into current stock per SKU.
create table stock_on_hand 
  with (kafka_topic = 'stock_on_hand', 
        format = 'AVRO') as 
  select i.sku as sku, 
         sum(quantity) as stock_on_hand 
    from inventory_s i 
    group by i.sku;

ksql> select * from stock_on_hand where sku = '3';
+---------------+---------------+
|SKU            |STOCK_ON_HAND  |
+---------------+---------------+
|3              |180            |
Query terminated
ksql> select * from stock_on_hand where sku = '3';
+---------------+---------------+
|SKU            |STOCK_ON_HAND  |
+---------------+---------------+
|3              |151            |
Query terminated
ksql> select * from stock_on_hand where sku = '3';
+---------------+---------------+
|SKU            |STOCK_ON_HAND  |
+---------------+---------------+
|3              |137            |
Query terminated
ksql> select * from stock_on_hand where sku = '3';
+---------------+---------------+
|SKU            |STOCK_ON_HAND  |
+---------------+---------------+
|3              |107            |
Query terminated
ksql>
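The pull queries above return a point-in-time answer. The same table also supports push queries, which stream every subsequent change to the subscriber; the statement below is the standard ksqlDB form, with `emit changes` as the only addition.

```sql
-- Push query: emits a new row each time stock for SKU '3' changes.
select sku, stock_on_hand 
  from stock_on_hand 
 where sku = '3' 
  emit changes;
```

A downstream dashboard or alerting service could hold such a query open and react the moment a SKU runs low, rather than polling.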

Using a Confluent sink connector to distribute real-time inventory data to an engine such as MongoDB can provide enterprise capabilities such as ad hoc search backed by curated indexes. Many organizations do this to unlock real-time inventory changes in a way that enhances the customer experience. For example, the receipt of a product in a store can immediately appear in the order management system (OMS). Combined with other dimensions, such as gross margin for that SKU at that location and distance to the customer, the OMS can then decide which location is best for fulfillment.
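Such a sink can likewise be declared from ksqlDB. A minimal sketch, assuming the MongoDB Kafka sink connector; the connection string, database, and collection names are placeholders.

```sql
-- Hypothetical MongoDB sink; connection settings are placeholders.
CREATE SINK CONNECTOR inventory_sink WITH (
  'connector.class' = 'com.mongodb.kafka.connect.MongoSinkConnector',
  'connection.uri'  = 'mongodb://mongo.internal:27017',
  'database'        = 'retail',
  'collection'      = 'stock_on_hand',
  'topics'          = 'stock_on_hand'
);
```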

Another example is real-time search, in which a product is listed only if it is in stock at a particular location. Customer-facing filters often include inventory, but the underlying data is frequently out of date by several hours or more. Customer satisfaction suffers when shoppers rely on that availability information, only to arrive at the store and find the product gone.

The result is a real-time, consistent view of inventory across online and all physical stores. Every second can impact millions of dollars in sales. Having the streaming architecture in place to optimize inventory translates to greater revenue capture. Businesses can deliver reliable shopping experiences that keep customers coming back for more, better estimate demand, ensure availability and timely fulfillment and delivery, and implement automation use cases such as a real-time replenishment system.

On the operations side, there's tremendous business value in reduced complexity, shortened cycle times, greater scalability and resiliency, and overall reduced cost.


With over 20 years of experience in sales engineering, Steve loves engaging with customers to assist in changing their view of data from historic databases and transient messaging to real-time streaming. Previously, he was a Principal Architect at Express and a Database Systems Architect at OCLC, Inc.
