The IT and OT landscape inside factories is extremely heterogeneous: many systems are needed to produce goods efficiently and effectively. This heterogeneity enables purpose-built software with fast iteration cycles, but it also requires integrations between systems whenever data needs to be made available to other systems along the life cycle. In this article, we will develop a one-way integration: exporting data originating from the warehouse to the ERP system.
In an earlier blog post we went through the process of designing event schemata for a warehouse logistics solution, and sketched the core functionality of the SkuFish. This fish's main responsibility is to track the metadata and current location of an individual stock-keeping unit (SKU) inside a warehouse.
As the ERP system shall remain the system of record for material, we now extend our solution with the functionality to export material movements. In this post, we will implement an application that exports bookings to an ERP system.
We'll walk through this implementation step by step: starting from the API the ERP system offers, converting the data originating from the shop floor, constructing an entity responsible for exporting, and finally giving an outlook on how to react to errors and failures.
What the ERP expects
In this case, we're assuming that the ERP system offers an HTTP-based API accepting material bookings, and we ignore any security considerations for the sake of this blog post.
Each movement performed by the warehouse workforce can be exported 1-to-1 to the ERP; let's illustrate this with an example:
- SKU A (article no. 42), quantity 500 has been moved from location X to Z
- SKU B (article no. 42), quantity 1000 has been moved from location Z to Y
These two movements will result in quantity changes at locations X, Y, and Z for article no. 42 in the ERP system's state. However, this logic is encapsulated behind the interface provided by the ERP system, which requires exactly one booking request to be made for each movement.
Let's take a look at the format the ERP expects for material movements:
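Since the concrete ERP schema is not given here, the following is a minimal sketch of what such a payload could look like; all field names are assumptions for illustration:

```typescript
// Hypothetical shape of the booking payload the ERP's HTTP API accepts.
// Field names are illustrative, not taken from a real ERP schema.
type ErpBooking = {
  articleNumber: number
  quantity: number
  fromLocation: string
  toLocation: string
}

// The first movement from the example above, expressed as a booking:
const exampleBooking: ErpBooking = {
  articleNumber: 42,
  quantity: 500,
  fromLocation: 'X',
  toLocation: 'Z',
}
```

One booking object corresponds to exactly one movement, matching the 1-to-1 relationship described above.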
We also add a convenience function to convert a SkuMovedEvent into this booking format:
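Such a conversion helper might look like the following sketch; the shape of the SkuMovedEvent is assumed from the earlier post, and all field names are illustrative:

```typescript
// Assumed shape of the SkuMovedEvent from the earlier post.
type SkuMovedEvent = {
  articleNumber: number
  quantity: number
  from: string
  to: string
}

// Hypothetical booking payload the ERP accepts (field names assumed).
type ErpBooking = {
  articleNumber: number
  quantity: number
  fromLocation: string
  toLocation: string
}

// Convenience function: map a movement 1-to-1 to an ERP booking.
const toErpBooking = (ev: SkuMovedEvent): ErpBooking => ({
  articleNumber: ev.articleNumber,
  quantity: ev.quantity,
  fromLocation: ev.from,
  toLocation: ev.to,
})
```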
Now we piece together the actual entity that is responsible for keeping track of which movements have been exported to the ERP system, and of their respective outcomes (they might be pending, in-flight, errored, or successful).
The fish needs to consume SkuMovedEvents to learn about new movements from the warehouse, and will emit events, such as BookingErrored, persisting the outcome of an actual booking:
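A sketch of these booking events as a discriminated union; BookingErrored is named in the text, while BookingSucceeded is an assumed name for the success case:

```typescript
// The fish's own booking events. BookingSucceeded is an assumed name
// for the success counterpart of BookingErrored.
type BookingSucceeded = {
  type: 'bookingSucceeded'
  bookingId: string
}

type BookingErrored = {
  type: 'bookingErrored'
  bookingId: string
  reason: string
}

type BookingEvent = BookingSucceeded | BookingErrored

// Example of discriminating on the event type:
const describeBooking = (ev: BookingEvent): string =>
  ev.type === 'bookingSucceeded'
    ? `booking ${ev.bookingId} succeeded`
    : `booking ${ev.bookingId} failed: ${ev.reason}`
```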
To keep track of pending requests, this fish keeps a log of bookings that still need to be executed. After a booking has been executed, it is pruned from the pending log. This logic can be formulated as:
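The bookkeeping can be sketched as a pure reducer. Field names are illustrative, and the eventId is passed in explicitly here for simplicity; in the Actyx Pond it comes from the event's metadata:

```typescript
// Sketch of the bookkeeping: new movements are added to the pending log,
// executed bookings are pruned from it.
type Booking = { articleNumber: number; quantity: number; from: string; to: string }

type Event =
  | { type: 'skuMoved'; articleNumber: number; quantity: number; from: string; to: string }
  | { type: 'bookingSucceeded'; bookingId: string }
  | { type: 'bookingErrored'; bookingId: string; reason: string }

type State = {
  // bookings still to be exported, keyed by the originating eventId
  pending: Record<string, Booking>
}

const onEvent = (state: State, event: Event, eventId: string): State => {
  switch (event.type) {
    case 'skuMoved': {
      // a new movement becomes a pending booking
      const { articleNumber, quantity, from, to } = event
      return { pending: { ...state.pending, [eventId]: { articleNumber, quantity, from, to } } }
    }
    case 'bookingSucceeded':
    case 'bookingErrored': {
      // booking executed: prune it from the pending log
      const { [event.bookingId]: _done, ...pending } = state.pending
      return { pending }
    }
  }
}
```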
To get a predictable, unique ID for each booking (remember: onEvent needs to be deterministic and pure), we use the eventId field of the event's metadata.
With the release of Actyx Pond version 2 (check out this post for an overview), an event can carry any number of tags and can be queried using any combination of them. This means an event is no longer bound to a single event stream originating from one fish, but can belong to many streams and be consumed individually. Here, instead of stringly typed tags, we're using the TypedTag feature of the Actyx Pond to link event types to explicit tags.
The skuMoved tag is used to identify the respective events of this entity. The erpBooking tag identifies any ERP booking by its unique ID, and can be used to construct per-booking tag instances.
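Conceptually, such a tag instance is just the tag name and the booking ID joined by a colon; the Pond's TypedTag API (e.g. Tag('erpBooking').withId(id)) builds these in a type-safe way. A plain-string sketch of the idea:

```typescript
// Conceptual sketch only: the TypedTag API of the Actyx Pond produces
// tag instances of this shape for a given booking id.
const erpBookingTag = (bookingId: string): string => `erpBooking:${bookingId}`
```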
Now that we have formulated all of the necessary bookkeeping, we can construct the complete MovementTrackingFish.
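Putting the pieces together, the fish might look like the following self-contained sketch. It mirrors the shape of a Pond v2 Fish ({ fishId, initialState, where, onEvent }) without importing the library, and all names besides MovementTrackingFish are assumptions:

```typescript
// Self-contained sketch of the MovementTrackingFish.
type Booking = { articleNumber: number; quantity: number; from: string; to: string }
type State = { pending: Record<string, Booking> }
type Event =
  | { type: 'skuMoved'; articleNumber: number; quantity: number; from: string; to: string }
  | { type: 'bookingSucceeded'; bookingId: string }
  | { type: 'bookingErrored'; bookingId: string; reason: string }

const MovementTrackingFish = {
  fishId: 'com.example.movementTracking', // hypothetical identifier
  initialState: { pending: {} } as State,
  // subscription: warehouse movements plus the fish's own booking events
  where: ['skuMoved', 'erpBooking'],
  onEvent: (state: State, event: Event, eventId: string): State => {
    if (event.type === 'skuMoved') {
      const { articleNumber, quantity, from, to } = event
      return { pending: { ...state.pending, [eventId]: { articleNumber, quantity, from, to } } }
    }
    const { [event.bookingId]: _done, ...pending } = state.pending
    return { pending }
  },
}
```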
But wait: all this boilerplate, and we have not made a single request to the ERP system yet! This is where the emissionController steps onto the stage.
Emissions to the ERP system
The MovementTrackingFish outlined above is fed by both external events originating from the SkuFish, and its own internal events for bookkeeping. So far, we only implemented the conversion from movements to pending bookings in the onEvent function above. Now what's left is doing the actual API request, and persisting the outcome of the API call.
This pipeline will be installed as a continuous state effect on the pond as follows:
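A sketch of such an effect, runnable here without the Pond: postBooking stands in for a hypothetical HTTP client, and with the real library the effect would be installed via pond.keepRunning(MovementTrackingFish, emissionController):

```typescript
// Sketch of the emissionController state effect: for every pending
// booking, perform the API request and enqueue an event persisting the
// outcome. postBooking is a hypothetical, injected HTTP client function.
type Booking = { articleNumber: number; quantity: number; from: string; to: string }
type State = { pending: Record<string, Booking> }
type BookingEvent =
  | { type: 'bookingSucceeded'; bookingId: string }
  | { type: 'bookingErrored'; bookingId: string; reason: string }

const mkEmissionController =
  (postBooking: (b: Booking) => Promise<void>) =>
  async (state: State, enqueue: (ev: BookingEvent) => void): Promise<void> => {
    for (const [bookingId, booking] of Object.entries(state.pending)) {
      try {
        await postBooking(booking)
        enqueue({ type: 'bookingSucceeded', bookingId })
      } catch (err) {
        enqueue({ type: 'bookingErrored', bookingId, reason: String(err) })
      }
    }
  }
```

Because each run only sees bookings still in the pending log, and successful bookings are pruned by onEvent before the next run, no booking is exported twice.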
This implementation is straightforward, as it relies on the following facts:
- There is a 1-to-1 relationship between an event and an ERP booking
- When used together with pond.keepRunning, the Actyx Pond guarantees that the installed state effect is executed in a strictly serialized fashion, and that all previously generated events are applied to the state passed into every subsequent execution, so no booking will be done multiple times
In a future article, we will look at how to implement an n-to-1 relationship from events to bookings. In the meantime, you can check out this blog post by our CTO Dr. Roland Kuhn on how to build a reporting pipeline using Differential Dataflow.
Reacting to Failed Bookings
Now, as we saw above, the API requests to the ERP system can fail for various reasons. In case of unavailability, we might simply implement a retry mechanism. In other cases, where certain business rules prohibit accepting a booking, human intervention is needed, usually by the warehouse logistics manager. For that, we may add a user interface displaying a log of the last exported bookings and their error states. In a future post, we will explore how to confidently extend an existing solution with such functionality, and deploy it as a new application running on top of ActyxOS.
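The split between retryable and non-retryable failures could be sketched as follows; the HTTP status codes and the classification rule are illustrative assumptions, not ERP-specific behavior:

```typescript
// Sketch: decide how to react to a failed booking request.
type FailureReaction = 'retry' | 'needsHumanIntervention'

const classifyFailure = (httpStatus: number): FailureReaction =>
  // 5xx / 429: the ERP is unavailable or overloaded, so retrying may help;
  // other 4xx responses indicate business-rule rejections that the
  // warehouse logistics manager has to resolve
  httpStatus >= 500 || httpStatus === 429 ? 'retry' : 'needsHumanIntervention'
```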