Core - Improvements, Changes & Fixes
This page covers the core improvements, changes, and bug fixes provided in release IPF-2024.2.0.
Core improvements, bug fixes & changes
New
- IPF Platform: Journal processor documentation added to Flo Starter Projects
- Connector: Configuration property added for timing out the call to the `CorrelationService` in `SendConnector`. A default value of 5s is provided at `ipf.connector.default-send-connector.correlation-stage-timeout`. If the value provided for `correlation-stage-timeout` is not less than the `call-timeout` specified for a particular connector, the correlation stage timeout will be reduced to 200ms less than the call timeout, and the updated value will be logged alongside a warning message (see the configuration sketch after this list).
- MongoDB: Added the capability to set the commit quorum. It can be set globally for all index creation with `ipf.mongodb.commit-quorum` (see the sketch after this list). See the individual component documentation for how to override it per component.
- IPF Processing Data: Exporters can produce data to different Kafka topics, configurable per data type.
- Flo-lang and Akka-persistence-mongo-db: Added configurable purging functionality for the journal and snapshot collections. The default behaviour is to not purge documents from either collection. The implementation utilises MongoDB and Cosmos TTL indexes, which will need to be created manually. Configuration guides can be found in the docs.
- Persistent Scheduler: Added timezone support to the persistent scheduler.
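
A minimal configuration sketch for the new correlation stage timeout, assuming HOCON-style configuration for the `ipf.*` properties; only the default property path and the 5s/200ms behaviour are taken from this release note, while the per-connector block layout and the name `my-send-connector` are illustrative assumptions:

```hocon
# Default wait on the CorrelationService for all send connectors (release default: 5s).
ipf.connector.default-send-connector.correlation-stage-timeout = 5s

# Illustrative per-connector call timeout ("my-send-connector" and the block layout are assumptions).
# If correlation-stage-timeout is not less than a connector's call-timeout, the correlation stage
# timeout is reduced to call-timeout minus 200ms and the adjusted value is logged with a warning.
ipf.connector.my-send-connector {
  call-timeout = 3s
}
```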
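And a sketch of the global commit quorum setting; the value `"majority"` is just one of the options MongoDB's index creation accepts (an integer member count or `"votingMembers"` also work):

```hocon
# Commit quorum applied to all index creation triggered by IPF components.
# Override per component where supported (see the individual component documentation).
ipf.mongodb.commit-quorum = "majority"
```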
Changed
- IPF File Poller - Breaking change - To support multiple processing entities, the IPF File Poller can now poll from multiple locations. This means the following configuration has changed from a single item to a list of items: `ipf.file-poller` → `ipf.file-poller.pollers` (a before/after sketch follows this list).
- Dynamic Settings Workflow - The redundant call to the file converter during file ingestion was removed from the file processor. This issue was affecting CSM Reachability Data Ingestion: `FileEntrySkippedEvent` and `PartyEntityDirectorySubTypeMappingSkippedEvent` file processing events were raised twice and errors were logged twice.
- Dynamic Settings Workflow - Added a new event and enriched existing events to improve monitoring: the existing `FileEntrySkippedEvent` is enriched with type and fileName, `ProcessingCompleteEvent` with type, outcome, file_name and process_name, and `ProcessingFailedEvent` with processName and fileName. Also added a new `FileEntryProcessedEvent` with metrics type and fileName (more about this in CSM Reachability and Industry Data Ingestion).
- Replaced the Caffeine sync cache implementation with the async cache implementation to fix multiple calls to the callback in the `getOrDefault` method of the `ipf-cache-caffeine` module.
- Updated `EventProcessorStream` to use the `mapAsyncPartitioned` operator instead of the previous `mapAsync` operator. Now, even when processing parallelism is enabled, related events are not processed in parallel within a single stream, allowing journal processors to be safely parallelised.
- Ipf-file-manager - `S3FileReader` fixed to be able to download larger files.
- Connector: `IngestedFile` as `ReceivedMessage`'s receive context has been replaced with `IngestedFileContext`.
- IPF Processing Data: Updated the `MdsWrapper` class's generic type constraint. The generic parameter `T` must now implement `java.io.Serializable`:
  - Changed from `MdsWrapper<T>` to `MdsWrapper<T extends Serializable>`
- IPF Transaction Cache - Changed the names of the indexes created on the `transactionCacheEntry` MongoDB collection:
  - `findByTypeAndHashIndex` renamed to `hash_1_type_1`
  - `findByTypeAndHashAndMessageIdIndex` renamed to `hash_1_type_1_messageId_1`
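
A before/after sketch of the File Poller breaking change, assuming HOCON-style configuration; only the move from `ipf.file-poller` to the `ipf.file-poller.pollers` list is taken from this release note, while the option names inside each poller (`directory`, `poll-interval`) and their values are illustrative placeholders:

```hocon
# Before: a single poller configured directly under ipf.file-poller
# ipf.file-poller {
#   directory     = "/data/inbound"   # illustrative option name
#   poll-interval = 10s               # illustrative option name
# }

# After: a list of pollers, one entry per location to poll
ipf.file-poller.pollers = [
  { directory = "/data/inbound",    poll-interval = 10s },
  { directory = "/data/archive-in", poll-interval = 30s }
]
```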
Fixed
- Connector - Memory consumption in the `LocalDirectoryConnectorTransport` component
- IPF Archiver - Fixed an issue where errors delivering archive bundles to Kafka were not correctly propagated, resulting in missing archive bundles.
Configuration
| Setting | Status | Notes |
| --- | --- | --- |
| … | Deprecated | Backward compatibility is maintained for this release but … |
| … | Deprecated | Backward compatibility is maintained for this release but … |
| … | Introduced | Defaults to … Before switching an existing system to use … |
| … | Introduced | These transports default to the existing Kafka transport configuration, with the default Kafka clients, and by extension the topics can be configured per data type, e.g. all message logs can go to a different topic (e.g. …). This change is non-breaking and behaves as before unless explicitly configured to use different topics. |
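
As an illustration of the per-data-type topic configuration described above, a hypothetical sketch follows; the property path and topic name are placeholders (not the actual IPF property names), so consult the IPF Processing Data documentation for the real keys:

```hocon
# Hypothetical example only: route message logs to a dedicated topic while all other
# processing data types keep the default Kafka transport configuration.
ipf.processing-data.export.message-log.topic = "ipf-message-logs"   # placeholder path and topic name
```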