Standalone Scheduler

The Persistent Scheduler can be brought into your application as an embedded dependency (see Embedded Implementation). Alternatively, it can be set up as a standalone application with an HTTP interface for inbound interactions and a Kafka publisher for scheduled jobs.

With the standalone set-up you can:

  1. Send scheduling requests over HTTP to your Scheduling application

  2. Have your scheduling requests execute at the given time

  3. Publish the scheduled command onto Kafka

  4. Consume the command from Kafka and handle the payload in your consuming application

This page does not cover how to send requests over HTTP; that is covered in Schedule your first job.

Step 1: Add Dependencies to the Scheduling Application

To set up your standalone application, you will need to add the following dependencies into your application’s pom.xml:

<dependencies>
    <dependency>
        <groupId>com.iconsolutions.ipf.core.platform</groupId>
        <artifactId>scheduler-http-controller</artifactId>
    </dependency>
    <dependency>
        <groupId>com.iconsolutions.ipf.core.platform</groupId>
        <artifactId>scheduler-external-trigger-kafka</artifactId>
    </dependency>
</dependencies>

The scheduler-http-controller artefact brings in the HTTP controller and the core Scheduler functionality.

The scheduler-external-trigger-kafka artefact publishes the scheduled command to a pre-defined Kafka topic.

Step 2: Add Dependencies to the Consuming Application

In the application that will handle the scheduled payload, you will need to add the following dependency to its pom.xml:

<dependency>
    <groupId>com.iconsolutions.ipf.core.platform</groupId>
    <artifactId>scheduler-external-trigger-kafka</artifactId>
</dependency>

This artefact provides a ReceiveConnector that consumes the messages published by the Scheduling application's scheduler-external-trigger-kafka module.

Step 3: Configure the Consuming Application

Configure Kafka Header Filters

The Scheduler Kafka sending module (scheduler-external-trigger-kafka) publishes messages with the following Kafka headers:

  1. source

  2. trigger-type

This allows the consuming application to filter messages using these headers.

You originally defined the values for these message headers in the TriggerCommand field of the HTTP request to the Scheduler HTTP Controller. For reference, see Scheduling Your First Job (via HTTP Client Library).

You can configure your consuming application to filter in specific values for these headers by adding the values to the arrays at the configuration paths below:

Kafka header    Config array to add values to
source          ipf.core.payment-releaser.adaptor.scheduler.kafka.expected-sources
trigger-type    ipf.core.payment-releaser.adaptor.scheduler.kafka.expected-trigger-types

If you do not set values for these filters in configuration, all messages published by your Scheduling application will be consumed.

By default, these fields are set to ["any"], a value that makes it obvious any message will be passed through the filters.

Otherwise, a message is filtered in only if, for each header, its value matches one of the strings in the corresponding configuration array.

Example

ipf.core.payment-releaser.adaptor.scheduler.kafka {
    expected-sources = ["releaser", "other-system"]
    expected-trigger-types = ["INSTRUCTION", "TRANSACTION"]
}

If the above were your configuration, then messages would be filtered as follows:

Source          Trigger-Type    Filtered in or out?
releaser        INSTRUCTION     IN
other-system    INSTRUCTION     IN
other-system    TRANSACTION     IN
releaser        something       OUT
something       TRANSACTION     OUT
something       something       OUT
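
Expressed as code, the effective matching rule is roughly the following. This is a minimal sketch, not the platform's actual implementation; the special handling of the "any" wildcard is an assumption based on the default configuration described above.

import java.util.List;
import java.util.Map;

class HeaderFilterSketch {

    // A message is filtered in only if each header value appears in the
    // corresponding configured array (or the array contains "any",
    // per the default configuration).
    static boolean filteredIn(Map<String, String> headers,
                              List<String> expectedSources,
                              List<String> expectedTriggerTypes) {
        return matches(expectedSources, headers.get("source"))
                && matches(expectedTriggerTypes, headers.get("trigger-type"));
    }

    static boolean matches(List<String> expected, String value) {
        return expected.contains("any") || expected.contains(value);
    }
}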

Further customisation

By default, a Criteria Spring Bean is wired in that filters messages according to the configuration above (with the default ["any"] values, it filters in all messages). However, if you want to customise message filtering further, you can define your own Criteria Spring Bean in the consuming application. An example Criteria Spring Bean is shown below:

import com.iconsolutions.ipf.core.connector.criteria.AndCriteria;
import com.iconsolutions.ipf.core.connector.criteria.Criteria;
import com.iconsolutions.ipf.core.connector.criteria.MessageHeaderCriteria;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Example configuration class (the class name is illustrative).
@Configuration
public class SchedulerCriteriaConfiguration {

    // Filter in only messages whose source AND trigger-type headers
    // both match the given values.
    @Bean
    public Criteria scheduledCommandFilteringCriteria() {
        return AndCriteria.create(
                MessageHeaderCriteria.create("source", "your-source"),
                MessageHeaderCriteria.create("trigger-type", "INSTRUCTION_PAYMENT_RELEASE"));
    }
}
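
Since AndCriteria requires every nested criterion to match, the example above filters in only messages whose source header is your-source and whose trigger-type header is INSTRUCTION_PAYMENT_RELEASE.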

Implement ExternalSchedulingHelper Spring Beans

For your consuming application to know what to do with the consumed messages, it must provide one or more ExternalSchedulingHelper implementations.

Each helper defines which concrete command class it supports (via the supports method) and what should happen when a supported command arrives (via the execute method).

An example implementation is below:

import com.iconsolutions.ipf.core.platform.api.models.ExternalTriggerCommand;
import com.iconsolutions.ipf.core.scheduler.client.connector.receive.kafka.helper.ExternalSchedulingHelper;
import lombok.RequiredArgsConstructor;

import java.util.concurrent.CompletionStage;

@RequiredArgsConstructor
public class TestExternalSchedulingHelper implements ExternalSchedulingHelper {

    // MyExecutingSystem and TestExternalSchedulingCommand are illustrative
    // classes belonging to your own application.
    private final MyExecutingSystem myExecutingSystem;

    @Override
    public boolean supports(ExternalTriggerCommand request) {
        // Only handle the command type this helper understands.
        return request instanceof TestExternalSchedulingCommand;
    }

    @Override
    public CompletionStage<Void> execute(ExternalTriggerCommand request) {
        // Hand the scheduled work to the executing system, discarding its result.
        return myExecutingSystem.execute(((TestExternalSchedulingCommand) request).getUnitOfWorkId())
                .thenApply(__ -> null);
    }
}

You must then register your concrete ExternalSchedulingHelper classes as Spring Beans.
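
For example, one way to do this is via a @Configuration class. This is a minimal sketch; the configuration class name is illustrative, and MyExecutingSystem is the assumed dependency from the example above, expected to be available elsewhere in your application context.

import com.iconsolutions.ipf.core.scheduler.client.connector.receive.kafka.helper.ExternalSchedulingHelper;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SchedulingHelperConfiguration {

    // Registers the example helper so the ReceiveConnector can route
    // supported commands to it.
    @Bean
    public ExternalSchedulingHelper testExternalSchedulingHelper(MyExecutingSystem myExecutingSystem) {
        return new TestExternalSchedulingHelper(myExecutingSystem);
    }
}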