Migration Steps for IPF-2024.3.0
Migration Steps for Flow Generation
Response and Reason codes
Reason and Response code enums are now generated ONLY in the model that they are used within. This leads to two potential changes:

- The existing core definitions of the 'AcceptOrReject' response codes and 'ISOReasonCodes' reason codes are now provided as standard implementations. This means that the packaging of these classes is now fixed and no longer model dependent. Hence any use of these classes will require the import declarations to change to:
  - com.iconsolutions.ipf.core.flow.domain.input.AcceptOrRejectCodes
  - com.iconsolutions.ipf.core.flow.domain.input.ISOReasonCodes
- If using multi-model solutions, ensure that only the copy generated in the original model is referenced within the code. As above, this may require changing the import packaging.
Importing Other Models
Previously, importing other models into a DSL-based solution was achieved by adding a block to the 'mps' module such as:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-dependency-plugin</artifactId>
    <version>3.1.2</version>
    <executions>
        <execution>
            <id>unpack-ipf-business-functions-plugin</id>
            <phase>initialize</phase>
            <goals>
                <goal>unpack</goal>
            </goals>
            <configuration>
                <artifactItems>
                    <artifactItem>
                        <groupId>__groupid of target mps model goes here__</groupId>
                        <artifactId>__solution name of mps model goes here__</artifactId>
                        <version>${icon-business-functions-aggregator.version}</version>
                        <type>zip</type>
                        <overWrite>true</overWrite>
                        <outputDirectory>${plugin_home}</outputDirectory>
                    </artifactItem>
                </artifactItems>
            </configuration>
        </execution>
    </executions>
</plugin>
The key change is that the artifactId field is now populated with the constant 'mps' (the name of the module itself) rather than the solution name of the project.
| Note that this change is ONLY applicable once the downstream solution being referenced has been upgraded to 2024.3.0 and is not dependent on the version of the consuming project. |
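After upgrading, the artifactItem therefore becomes the following (a sketch based on the block above; the groupId placeholder is unchanged and remains project-specific):

```xml
<artifactItem>
    <groupId>__groupid of target mps model goes here__</groupId>
    <!-- Changed in 2024.3.0: the artifactId is now the constant module name 'mps' -->
    <artifactId>mps</artifactId>
    <version>${icon-business-functions-aggregator.version}</version>
    <type>zip</type>
    <overWrite>true</overWrite>
    <outputDirectory>${plugin_home}</outputDirectory>
</artifactItem>
```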
Migration Steps for Connectors
Resiliency Settings
- withResiliencySettings(ResiliencySettings resiliencySettings) has been deprecated and replaced with Function<ResiliencySettings, ResiliencySettings> resiliencySettingsCustomiser. The purpose of this change is to make the resiliency configuration available to the connector operations API.

Before:
.withResiliencySettings(ResiliencySettings.builder()
    .withMinimumNumberOfCalls(1)
    .withMaxAttempts(3)
    .withRetryOnSendResultWhen(outcome -> {
        // will retry only in state 1
        var response = ((DeliveryOutcome) outcome).getResponse();
        return FAILURE_REPLY_STRING.equals(response.getReceivedMessage().getMessage().getPayload());
    })
    .build())
Now the resiliency configuration should be applied through the customiser. For example:
.withResiliencySettingsCustomiser(settings -> settings.toBuilder()
    .withMinimumNumberOfCalls(1)
    .withMaxAttempts(3)
    .withResiliencyConfig(settings.getResiliencyConfig())
    .withRetryOnSendResultWhen(outcome -> {
        // will retry only in state 1
        var response = ((DeliveryOutcome) outcome).getResponse();
        return FAILURE_REPLY_STRING.equals(response.getReceivedMessage().getMessage().getPayload());
    })
    .build())
The resiliency config is created automatically and passed as the settings argument for use elsewhere.
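The motivation for the customiser signature can be shown with a small self-contained sketch (plain Java, NOT the IPF API; the Settings class and resolve method are illustrative stand-ins): the framework constructs the default settings and hands them to a Function, so callers adjust defaults rather than replace them wholesale.

```java
import java.util.function.Function;

// Self-contained illustration of the customiser pattern (NOT the IPF API):
// the framework builds default settings and passes them to the customiser,
// so user code tweaks the defaults instead of replacing them outright.
public class CustomiserDemo {

    static final class Settings {
        final int minimumNumberOfCalls;
        final int maxAttempts;

        Settings(int minimumNumberOfCalls, int maxAttempts) {
            this.minimumNumberOfCalls = minimumNumberOfCalls;
            this.maxAttempts = maxAttempts;
        }

        Settings withMaxAttempts(int maxAttempts) {
            return new Settings(minimumNumberOfCalls, maxAttempts);
        }
    }

    // Stand-in for the framework: creates the defaults, then applies the customiser.
    static Settings resolve(Function<Settings, Settings> customiser) {
        Settings defaults = new Settings(1, 3);
        return customiser.apply(defaults);
    }

    public static void main(String[] args) {
        // The caller only overrides what it cares about; other defaults survive.
        Settings tuned = resolve(s -> s.withMaxAttempts(5));
        System.out.println(tuned.minimumNumberOfCalls + " " + tuned.maxAttempts); // prints "1 5"
    }
}
```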
Local Directory Connectors and transport
- FileHealthCheckConfig configuration can now be specified per individual file transport. This can be achieved by using LocalDirectoryConnectorTransport.builder() and either:
  - including a fileCheckConfig configuration block in the main file transport config file, or
  - directly providing a custom root path to FileHealthCheckSettings create(ClassicActorSystemProvider actorSystem, String configRootPath) and passing the result to the builder via the .withFileHealthCheckSettings(FileHealthCheckSettings settings) method
- LocalDirectoryConnectorTransport(ActorSystem actorSystem, String name, FileIngestionConfiguration fileIngestionConfiguration) is deprecated and will be removed in the next release. Please use LocalDirectoryConnectorTransport.builder() instead.
- LocalDirectoryTransportConfiguration(String configRootPath, Config config) is deprecated and will be removed in the next release. Please use LocalDirectoryTransportConfiguration(ClassicActorSystemProvider actorSystem, String configRootPath) instead.
- static FileHealthCheckSettings createDefault(Config config) is deprecated and will be removed in the next release. Please use static FileHealthCheckSettings create(ClassicActorSystemProvider actorSystem, String configRootPath) instead.
- The withTransportConfiguration method on LocalDirectoryConnectorTransport.Builder is marked as deprecated and scheduled for removal.
- LocalDirectoryConnectorTransport will now filter out files that are currently being processed from its polls, enabling interval to be safely set to durations shorter than expected processing times (seconds instead of hours).
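Putting the second option together, a sketch of the per-transport wiring (method names are taken from the signatures above; the config root path and surrounding variables are illustrative, and the snippet is not compilable without the IPF connector dependencies):

```java
// Sketch only: build per-transport health-check settings from a custom
// config root path, then attach them to the transport builder.
// "file-transports.archive" is a hypothetical HOCON path.
FileHealthCheckSettings healthCheckSettings =
        FileHealthCheckSettings.create(actorSystem, "file-transports.archive");

LocalDirectoryConnectorTransport transport =
        LocalDirectoryConnectorTransport.builder()
                .withFileHealthCheckSettings(healthCheckSettings)
                .build();
```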
Deprecating directory mapping from MongoDB directory-mapping collection
Directory mapping via the MongoDB directory-mapping collection is deprecated; the ipf.file-ingestion.directory-mappings HOCON configuration will now be used for directory mappings instead.
From now on, a file ingester is not allowed to run without an appropriate directoryId entry in directory-mappings.
Migration steps
1. Back up all data from the Mongo directory-mapping collection.
2. For each custom ingester, add the related Mongo document data from the directory-mapping collection to the ingester's .conf file.
3. Restart the application and check that the log contains no warnings with the message: Missing required HOCON configuration: ipf.file-ingestion.directory-mappings.
4. Make sure that the log does not contain warnings like:
   - Mongo directory-mappings documents value doesn't exist in Hocon configuration.
   - Mismatch found for Mongo directory-mappings documents value and Hocon configuration.
5. Delete the Mongo directory-mapping collection once the previous steps are fulfilled.
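Moving the Mongo document data into an ingester's .conf file might look something like the following. This is a hypothetical sketch: the directoryId and path values, and the exact key names inside each mapping entry, are illustrative only and should be checked against the file-ingestion reference documentation.

```
# Hypothetical example: the entry structure and key names are illustrative
ipf.file-ingestion.directory-mappings = [
  {
    directory-id = "payments-inbound"          # the ingester's directoryId
    directory-path = "/data/inbound/payments"  # placeholder path
  }
]
```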
Http Connectors and transports
- HttpConnectorTransport<T>.Builder should use only the name, actor system and config root path when building transports. Use <T> Builder<T> builder(String name, ClassicActorSystemProvider actorSystem, String configRootPath).
- HttpReceiveConnectorTransportFactory is deprecated and will be removed; use HttpReceiveConnectorTransport.Builder instead.
- The withTransportConfiguration method on HttpConnectorTransport<T>.Builder and HttpReceiveConnectorTransport.Builder is marked as deprecated and scheduled for removal.
- Use status-codes-treated-as-errors to define status codes that are errors and cannot be ignored. These status codes will be used when building treatErrorResponseAsFailureWhen predicates.
- Use <REQ_D, REQ_T, REP_D, REP_T> Builder<REQ_D, REQ_T, REP_D, REP_T> builder(String name, String configRootPath, ClassicActorSystemProvider actorSystem) when building Request-Reply Send connectors.
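For example, the error status codes might be listed as follows. Only the status-codes-treated-as-errors key name comes from these notes; its placement within the connector's HOCON tree and the chosen codes are illustrative.

```
# Treat these server errors as failures that cannot be ignored; they feed
# the generated treatErrorResponseAsFailureWhen predicates.
status-codes-treated-as-errors = [500, 502, 503, 504]
```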
JMS Connectors and transports
- The JMS Connector Transport builder should use only the name, actor system, config root path and connection factory.
- JmsConnectorTransportFactory is deprecated and will be removed; use JmsConnectorTransport.Builder instead.
- JmsReceiveConnectorTransportFactory is deprecated and will be removed; use JmsReceiveConnectorTransport.Builder instead.
- The withTransportConfiguration method on JmsAckReceiveConnectorTransport.Builder, JmsConnectorTransport.Builder and JmsReceiveConnectorTransport.Builder is marked as deprecated and scheduled for removal.
Migration Steps for Icon Akka Plugins
Akka Discovery MongoDB
akka.discovery.akka-mongodb.uri, akka.discovery.akka-mongodb.set-ssl-context and akka.discovery.akka-mongodb.ssl-context will now default to their ipf.mongodb counterparts (ipf.mongodb.url, ipf.mongodb.set-ssl-context and ipf.mongodb.ssl-context, respectively) and no longer have to be manually set if the counterparts are provided.
Akka Lease MongoDB
akka.coordination.lease.mongodb.url, akka.coordination.lease.mongodb.set-ssl-context and akka.coordination.lease.mongodb.ssl-context will now default to their ipf.mongodb counterparts (ipf.mongodb.url, ipf.mongodb.set-ssl-context and ipf.mongodb.ssl-context, respectively) and no longer have to be manually set if the counterparts are provided.
Akka Persistence MongoDB
iconsolutions.akka.persistence.mongodb.read-concern has been removed, use readConcernLevel option in the connection string to set the read concern.
iconsolutions.akka.persistence.mongodb.url, iconsolutions.akka.persistence.mongodb.set-ssl-context and iconsolutions.akka.persistence.mongodb.ssl-context will now default to their ipf.mongodb counterparts (ipf.mongodb.url, ipf.mongodb.set-ssl-context and ipf.mongodb.ssl-context, respectively) and no longer have to be manually set if the counterparts are provided.
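With this defaulting in place, the shared values need to be set only once. A sketch (the URL is a placeholder):

```
# Set once; akka.discovery.akka-mongodb.*, akka.coordination.lease.mongodb.*
# and iconsolutions.akka.persistence.mongodb.* now default to these values.
ipf.mongodb {
  url = "mongodb://localhost:27017/ipf"   # placeholder connection string
  set-ssl-context = false
}
```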
Migration Steps for IPF Processing Data Version 2
All core IPF applications are able to consume data from both the V2 and V1 IPF Processing Data model. By default, all IPF Processing Data Egress plugins will export data using the V2 data model. If you have any custom applications that consume from IPF Processing Data, the following steps should be taken.
Set Egress Applications to use V1
If your consuming applications cannot handle the V2 data model, you should for now continue to export using the V1 data model. For all applications that utilise the IPF Processing Data Egress plugins, configure ipf.processing-data.egress.schema-version = 1 to continue producing data using the V1 data model.
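For example, in the application's HOCON configuration:

```
# Keep exporting the V1 IPF Processing Data model until all consumers
# can handle V2, then switch this value to 2.
ipf.processing-data.egress.schema-version = 1
```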
Update consuming applications
Update each application that consumes from IPF Processing Data so that they can handle both the V2 and V1 data model. For more information, see the consume IPF Processing Data guide.
Set Egress Applications to use V2
Once all your consuming applications are able to handle both the V2 and V1 data models, you can safely update your producers to export messages using the V2 data model. This can be done by configuring ipf.processing-data.egress.schema-version = 2.
Event Processor ID Resolution Fix in Egress Journal Processor
Issue Overview
In versions prior to 2024.3, the ipf-processing-data-egress-journal-processor module had an inconsistent resolution of the event-processor.id configuration property. Depending on Java classpath resolution at runtime, the event processor ID would resolve to either EventProcessor (incorrect value) or IpfProcessingDataEventProcessor (intended value).
Resolution
Version 2024.3 fixes this inconsistency. The event processor ID will now correctly resolve to IpfProcessingDataEventProcessor in all cases.
Migration Impact
Services that previously resolved to EventProcessor require configuration changes during the 2024.3 upgrade to prevent their Egress Journal Processors from reprocessing the entire event journal.
Determining If Action Is Required
| You need to perform this check for each orchestration application, otherwise you risk issues in production. |
You can verify if your application needs configuration changes using either of these methods.
For users with network access to service instances, run:
curl -s localhost:8080/actuator/info \
| grep -o -P '"event.processor.id":"EventProcessor"'
For users with MongoDB access:
mongo <connection_params_omitted> --eval \
'db.mongoOffset.find({"_id.eventProcessorId":"EventProcessor"})'