Anatomy of a Typical IPF Orchestration Application

A typical IPF orchestration application will usually consist of the following components:

[Diagram: components of a typical IPF orchestration application]
  • Domains represent DSL-defined (and MPS-generated) components that provide the orchestration logic:

    • Flow Event Sourced Behaviours (ESBs) are the source of truth when it comes to orchestration - they implement the finite state machines that drive the business logic and are fully generated by MPS. Since ESBs rely on event sourcing, they read and persist domain events by calling the appropriate Akka Persistence Plugin.

    • Input Adapters are components that are used to send inputs to Flow ESBs and thus trigger transitions between the finite state machine states; the adapters are fully generated by MPS.

    • Action Adapters are components defined by the IPF orchestration service that implement the Action Ports generated by MPS and are wired into the Domains by the service.

  • Receive Connectors are components made available via IPF system connectivity libraries and implemented and configured by the orchestration service to accept appropriate messages from external domains. Most of the time, they receive a message, convert it into an appropriate Input and delegate the handling of that input to an Input Adapter (a minimal sketch of this hand-off follows the list).

  • Send Connectors, like Receive Connectors, are part of the IPF system connectivity libraries and are also implemented and configured by the orchestration service, but in this case for sending messages to the appropriate external domains.

  • Cache Adapters are components made available via the IPF Cache libraries and configured for use by the orchestration service. Most of the time they are used in front of a Request Reply Send Connector, typically for caching results of HTTP calls; they delegate to the connector in case of a cache miss but otherwise return cached results.
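
The hand-off from a Receive Connector to an Input Adapter mentioned above can be pictured with a minimal sketch. All of the type and method names below (PaymentOrderInput, ProcessingOutcome, InputAdapter, PaymentOrderReceiveConnector, onMessage) are assumptions made purely for illustration and are not the actual IPF connectivity or MPS-generated API; the sketch only shows the shape of the delegation.

```java
// Illustrative sketch only - the names below are assumptions, not the real IPF API.
import java.math.BigDecimal;
import java.util.concurrent.CompletionStage;

/** Hypothetical domain input produced from an external message. */
record PaymentOrderInput(String paymentId, BigDecimal amount) {}

/** Hypothetical outcome returned once the Flow ESB has dealt with the resulting command. */
record ProcessingOutcome(boolean accepted, String reason) {}

/** Input Adapter contract in the style of a generated component: accept an Input, return an async outcome. */
interface InputAdapter<I> {
    CompletionStage<ProcessingOutcome> handle(I input);
}

/** A Receive Connector's job: accept an external message, convert it, delegate to the adapter. */
class PaymentOrderReceiveConnector {

    private final InputAdapter<PaymentOrderInput> inputAdapter;

    PaymentOrderReceiveConnector(InputAdapter<PaymentOrderInput> inputAdapter) {
        this.inputAdapter = inputAdapter;
    }

    CompletionStage<ProcessingOutcome> onMessage(String rawMessage) {
        PaymentOrderInput input = convert(rawMessage);  // transport-specific conversion
        return inputAdapter.handle(input);              // completion frees the connector's buffer slot
    }

    private PaymentOrderInput convert(String rawMessage) {
        // Parsing elided; a real connector would use the transport's codec here.
        return new PaymentOrderInput("example-id", BigDecimal.TEN);
    }
}
```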

IPF components do not work in isolation and the way they typically connect with each other is best illustrated by inspecting how a message typically flows through an IPF orchestration service. Coupled to the message flow is the way components apply backpressure to their upstream, which translates to the way backpressure is applied within the service as a whole. Understanding how a message flows through the system and how and where backpressure is applied will prove invaluable in any troubleshooting, capacity planning or performance tuning task.

Message Flow

[Diagram: message flow through an IPF orchestration service]

You can find a brief description of each of the steps below. Steps 1-3 and 4-10 are likely to be performed on different IPF orchestration service nodes.

  1. Messages usually enter an IPF orchestration service via a Receive Connector.

  2. The connector transforms the received message into an appropriate Input and calls its Input Adapter.

  3. The sole purpose of the adapter is to convert the input into a command and send it to the appropriate Flow ESB, which may reside on a different service node.

  4. When it receives the command, the Flow ESB first checks whether the command should be applied; if so, an event is persisted using the persistence plugin. If the command should not be handled, further steps are skipped.

  5. (Optional) The Akka Persistence Plugin converts the events into database write commands and communicates with the database.

  6. (Optional) If there are configured actions to be triggered, the Flow ESB triggers them now via the appropriate Action Adapters. If no actions are to be triggered, further steps are skipped.

  7. (Optional) Depending on the action type, several things might happen:

    1. A regular Send Connector is called, meaning the action simply sends a message to the external domain, and any response to that message from the external domain will be received via a Receive Connector. Alternatively, an uncached Request Reply Send Connector may be called.

    2. A cached Request Reply Send Connector is called, meaning the call is first made to the configured Cache Adapter (a simplified sketch of this read-through behaviour follows these steps).

      1. On a cache miss, the Request Reply Send Connector is called and the result of the operation is stored in the cache for future use.

      2. On a cache hit, we skip to step 9.

    3. The action does not require the use of a connector, and we skip to step 10.

  8. (Optional) The Send Connector converts the action’s data into a format that the external domain expects and sends it using the configured transport. If the connector is not a Request Reply Send Connector, further steps are skipped.

  9. (Optional) The Request Reply Send Connector receives the response from the external domain and returns it to the Action Adapter.

  10. (Optional) The Action Adapter converts the response into an appropriate Input and calls its Input Adapter. The flow continues from step 3.
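
Step 7.2 above describes what is essentially read-through caching in front of a request-reply call. The sketch below illustrates that logic under stated assumptions: the RequestReplySendConnector interface, the CachingRequestReplyAdapter class and the in-memory map are all simplifications made for illustration, not the actual IPF Cache library API.

```java
// Illustrative sketch only - the names and the in-memory map are assumptions,
// not the actual IPF Cache library API.
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/** Hypothetical request-reply send connector: sends a request and waits for the reply. */
interface RequestReplySendConnector<Req, Resp> {
    CompletionStage<Resp> sendAndReceive(Req request);
}

/** A cache adapter placed in front of a Request Reply Send Connector (step 7.2). */
class CachingRequestReplyAdapter<Req, Resp> {

    private final RequestReplySendConnector<Req, Resp> connector;
    private final ConcurrentMap<Req, Resp> cache = new ConcurrentHashMap<>();

    CachingRequestReplyAdapter(RequestReplySendConnector<Req, Resp> connector) {
        this.connector = connector;
    }

    CompletionStage<Resp> call(Req request) {
        Resp cached = cache.get(request);
        if (cached != null) {
            // Cache hit: skip the external call entirely (steps 8-9 are not needed).
            return CompletableFuture.completedFuture(cached);
        }
        // Cache miss: delegate to the connector and store the result for future use.
        return connector.sendAndReceive(request)
                        .thenApply(response -> {
                            cache.put(request, response);
                            return response;
                        });
    }
}
```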

Backpressure

Most IPF components are built on top of reactive streams implementations, and one of the features that reactive streams offer is backpressure: a means of flow control that lets consumers of data notify a producer about their current availability, effectively slowing the upstream producer down to match their consumption speed.

Since, as explained in the message flow section above, all the IPF components are intricately linked together, their backpressure mechanisms will often combine. What this means in practice is that Receive Connectors are the final target of backpressure for the whole IPF orchestration service, and backpressure from the orchestration service’s downstream dependencies will translate up to them. This also means that the speed at which a Receive Connector consumes and processes work is determined by the processing speed of the entire message flow and is no faster than its slowest step.
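
The mechanism itself is easy to demonstrate in isolation. The fragment below is a generic Akka Streams example (not IPF code; the parallelism value and the slow downstream call are arbitrary assumptions) showing how a stage with a bounded number of in-flight elements stops demanding new elements from its upstream, which is exactly how a slow step anywhere in the flow ends up throttling the Receive Connector.

```java
// Generic Akka Streams illustration of backpressure - not IPF code.
import akka.actor.ActorSystem;
import akka.stream.javadsl.Sink;
import akka.stream.javadsl.Source;
import java.util.concurrent.CompletableFuture;

public class BackpressureDemo {

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("backpressure-demo");

        Source.range(1, 1_000)
              // At most 4 elements are in flight at any time; once 4 are being processed the
              // stage stops demanding new elements, so the source (the "Receive Connector" in
              // the analogy) runs no faster than the slowest downstream step.
              .mapAsync(4, msg -> CompletableFuture.supplyAsync(() -> slowDownstreamCall(msg)))
              .runWith(Sink.foreach(System.out::println), system)
              .thenRun(system::terminate);
    }

    private static Integer slowDownstreamCall(Integer msg) {
        try {
            Thread.sleep(50); // stand-in for a slow database write or remote call
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return msg;
    }
}
```

In the orchestration service the same principle applies end to end, as the step-by-step notes below describe.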

  1. Each Receive Connector has a small buffer of messages that are allowed to be processed in parallel; when the buffer is full, the connector will backpressure, but whether and how that backpressure is translated to the upstream depends entirely on the connector’s transport. Messages are only removed from the buffer when their processing completes (successfully or not). Depending on the connector’s resiliency settings, processing of a failed message can be retried a configurable number of times, meaning that steps 2-10 may be repeated several times on error, until the maximum number of attempts is reached or until the configured timeout duration elapses.

  2. Input Adapters offer no backpressure mechanism of their own; instead they propagate downstream backpressure by waiting to receive a successful processing outcome message, which signals that steps 3-10 have completed successfully. The adapters come with their own RetrySupport: a failed call to the Flow ESB will be retried a configurable number of times before the failure is propagated to the connector.

  3. Each call to the Flow ESB from an Input Adapter uses Akka’s support for location-transparent cluster messaging and waits for a configured amount of time for the processing outcome message to be received before timing out and signaling to the adapter’s RetryStrategy that the attempt has failed (a simplified sketch of this timeout-and-retry behaviour follows this list). If the Flow ESB determines that the command should not be applied (e.g. invalid for the current state, a duplicate, etc.), a processing outcome is sent to the Input Adapter and further steps are skipped.

  4. The speed at which commands are processed directly depends on the speed at which the Akka Persistence Plugin is capable of persisting them. Once the event is persisted, the Flow ESB is free to accept new commands. The processing outcome will be sent asynchronously once all the actions are complete (see step 6).

  5. If the database client is reactive, it may apply some backpressure mechanisms of its own.

  6. All configured actions that the Flow ESB triggers are called asynchronously and in parallel with each other. Only once all of them are complete is the processing outcome sent to the Input Adapter from step 2. It’s important to mention that action retries scheduled by the Flow ESB are not part of the backpressure flow and do not impact the processing outcome in any way.

  7. Depending on the action type, different backpressure rules apply:

    1. Both the regular Send Connector and the Request Reply Send Connector provide a buffer for messages being sent in parallel; once the buffer is full, send connectors backpressure.

    2. Cache Adapters typically offer no backpressure mechanisms, but in most cases this isn’t an issue since reading the cache is an in-memory operation. Writes will backpressure on the send connector call first; if a remote call is needed, the slowness of that call will translate all the way up to the Receive Connector.

  8. Both types of Send Connector support configurable retries and timeouts. Messages are only removed from the connector’s buffer when their processing completes (successfully or not) so depending on the connector’s resiliency settings, this step can be performed multiple times per message.

  9. Slow responses will cause the Request Reply Send Connector to translate the backpressure upstream.

  10. The flow continues from step 3 and all the backpressure signals will be passed on to the Receive Connector from step 1.
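
The timeout-and-retry behaviour referenced in step 3 can be sketched as follows. This is a deliberately simplified, hand-rolled version: the real Input Adapters use their configured RetrySupport/RetryStrategy rather than anything like this, and the names here (sendWithRetries, askFlowEsb, ProcessingOutcome) are assumptions for illustration only.

```java
// Simplified illustration of "per-attempt timeout + bounded retries" - not the actual
// IPF RetrySupport/RetryStrategy implementation.
import java.time.Duration;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

public class RetryingEsbCall {

    record ProcessingOutcome(boolean accepted) {}

    static CompletionStage<ProcessingOutcome> sendWithRetries(
            Supplier<CompletionStage<ProcessingOutcome>> askFlowEsb,
            int maxAttempts,
            Duration perAttemptTimeout) {

        // Each attempt waits at most perAttemptTimeout for the processing outcome message.
        CompletableFuture<ProcessingOutcome> attempt =
                askFlowEsb.get()
                          .toCompletableFuture()
                          .orTimeout(perAttemptTimeout.toMillis(), TimeUnit.MILLISECONDS);

        if (maxAttempts <= 1) {
            // Last attempt: a failure here propagates back to the Receive Connector,
            // which keeps the message in its buffer and applies its own resiliency settings.
            return attempt;
        }

        // On a timeout or any other failure, make another attempt.
        return attempt.exceptionallyCompose(
                error -> sendWithRetries(askFlowEsb, maxAttempts - 1, perAttemptTimeout));
    }
}
```

Until one of these attempts completes (or the last one fails), the message occupies a slot in the Receive Connector’s buffer, which is how the backpressure described above builds up.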