
Migration guide from DPS v1 to DPS v2

The following steps describe what is necessary for services to move from DPS v1 to DPS v2.

Remove unused modules

The repository modules previously used by DPS v1, setting-<setting-name>-repository, should be removed from projects, as they are no longer used in DPS v2. The whole configuration now lives in the domain modules, setting-<setting-name>-domain.
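For illustration, assuming a Maven build (adjust accordingly for Gradle), the removal amounts to deleting the repository-module dependency from your pom.xml; the groupId placeholder below is whatever your DPS v1 modules used:

```xml
<!-- Remove: this module no longer exists in DPS v2; its configuration
     moved to the setting-<setting-name>-domain module -->
<dependency>
    <groupId><!-- your DPS settings group id --></groupId>
    <artifactId>setting-<setting-name>-repository</artifactId>
</dependency>
```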

Create indexes

In DPS v1, the indexes for each setting were provided out of the box. In DPS v2, indexes can be created through HOCON configuration: for each setting, a set of indexes should be created through the reference.conf of the corresponding domain module.

ipf.dps.mongodb.index-config.<setting-type> {
    index-1 = ["status:ASC"]
    index-2 = ["processingEntity:ASC"]
    index-3 = ["values.payload.field1:ASC"]
    index-4 = ["values.payload.field2.field3:ASC"]
}

Fields that are under the setting payload need to be referenced as values.payload.<field-name> in the index definitions.
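The prefixing rule can be sketched as a one-line helper (illustrative only; the field names match the hypothetical config above):

```java
// Sketch: builds the index field path for a setting payload field, per the
// rule that payload fields live under "values.payload" in DPS v2.
public class IndexPaths {

    static String indexPath(String payloadField) {
        return "values.payload." + payloadField;
    }

    public static void main(String[] args) {
        System.out.println(indexPath("field1"));        // values.payload.field1
        System.out.println(indexPath("field2.field3")); // values.payload.field2.field3
    }
}
```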

The DPS Index Creation page contains more information regarding index creation.

Notifications

DPS v2 can send a notification after certain operations on a setting are performed. These notifications are disabled by default and can be enabled with the following properties:

ipf.dps.notification-service.enabled = true # notifications for all CRUD operations are sent
ipf.dps.notification-service.send-notifications-for-scheduled-settings = true # notifications for all operations on scheduled settings are sent

You can also define your own properties to enable or disable notifications for each setting definition.

    @Value("${ipf.service.should-save-history.dpssample-setting:true}")
    private boolean historyEnabled;

    @Value("${ipf.service.should-send-notification.dpssample-setting}")
    private boolean notificationsEnabled;

    @Bean
    public SettingDefinition<DpsSampleSetting> dpsSampleSettingDefinition() {
        return SettingDefinition.<DpsSampleSetting>builder()
                .settingType(DpsSampleSetting.SETTING_TYPE)
                .collectionName("setting-" + DpsSampleSetting.SETTING_TYPE)
                .idFunction(setting -> setting.getProcessingEntity() + "-" + setting.getPayload().getFullName())
                .historyEnabled(historyEnabled) (1)
                .notificationsEnabled(notificationsEnabled) (2)
                .payloadClass(DpsSampleSetting.class)
                .payloadExample(payloadExample())
                .searchableFieldsClass(DpsSampleSettingSearchableFields.class)
                .searchableFieldsExample(dpsSampleSettingQueryExample())
                .build();
    }
1 history enabled - whether to track history for setting changes. It is enabled by default.
2 notifications enabled - whether to send Kafka notifications when a setting is created, updated or deleted. It is disabled by default, as DPS v1 did not have this functionality.
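For example, with the custom properties above (the property names are taken from the @Value placeholders in the snippet; adjust them to your own service), the per-setting toggles could be set in your service configuration:

```hocon
# Per-setting toggles matching the @Value placeholders above
ipf.service {
  should-save-history.dpssample-setting = true
  should-send-notification.dpssample-setting = true
}
```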

Kafka needs to be deployed in order to send and receive notifications. Example of how to configure the Kafka topic and notifications:

ipf.dps {
  notification-service {
    enabled = true
    kafka {
      producer {
        topic = DPS_CRUD_NOTIFICATION
        restart-settings = ${common-flow-restart-settings}
        kafka-clients {
          group.id = dps-crud-notification-group
        }
      }
    }
  }
}

common-flow-restart-settings {
  min-backoff = 1s
  max-backoff = 5s
  random-factor = 0.25
  max-restarts = 5
  max-restarts-within = 10m
}

More details on how to implement a receive endpoint for these Kafka notifications can be found on the DPS Client Notification page.

Connector and Direct Queries

Most services that used DPS v1 implemented their own connectors and direct queries. DPS v2 provides these out of the box.

A detailed description of how to import these can be found on the Connector and Direct Implementations page. The suggestion is to use the DPS v2 interfaces instead of custom service implementations to avoid any misconfiguration.

The best approach is to switch custom service implementations to the provided interfaces, such as DpsSearchClientPort, DpsCrudClientPort and DpsHistoryClientPort, and to configure the specific properties (connector or direct) as described on the page above.
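A minimal sketch of that switch, assuming constructor injection. The DpsCrudClientPort interface is stubbed below only to keep the example self-contained; its method names are assumptions, not the real DPS v2 signatures, so use the actual interface from the DPS v2 client library:

```java
public class SampleSettingService {

    // Stub standing in for the real DpsCrudClientPort from the DPS v2 client;
    // the method names here are illustrative assumptions.
    interface DpsCrudClientPort {
        void save(String settingType, String id, Object payload);
        Object find(String settingType, String id);
    }

    private final DpsCrudClientPort dpsCrudClient;

    // Inject the DPS v2 port instead of a custom repository implementation
    public SampleSettingService(DpsCrudClientPort dpsCrudClient) {
        this.dpsCrudClient = dpsCrudClient;
    }

    public void storeSetting(String id, Object payload) {
        dpsCrudClient.save("dpssample-setting", id, payload);
    }

    public Object loadSetting(String id) {
        return dpsCrudClient.find("dpssample-setting", id);
    }
}
```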

Databases

DPS v2 introduces scheduled settings and uses the Persistence Scheduler for their maintenance. To configure the Persistence Scheduler properly, two new collections and their indexes must be created. The following configuration should be added to your database (example cosmos-schema.yaml):

- name: jobSpecification
  throughput: 400
  default_ttl_seconds: 3600 # should be tied to deleteTime
  additional_indexes:
    - keys:
        - deleteTime
      unique: false
    - keys:
        - _id.jobSpecificationId
        - _id.lastUpdated
      unique: false
- name: jobExecutionStatus
  throughput: 400
  default_ttl_seconds: 3600 # should be tied to deleteTime
  additional_indexes:
    - keys:
        - deleteTime
      unique: false
    - keys:
        - _id.jobSpecificationId
        - _id.lastUpdated
      unique: false

MongoDB

Since DPS v2 uses change streams, MongoDB must run as a replica set for change streams to work. This is mostly a note for BDD (local) test setups rather than for production configuration, which will always have a replica set configured.

The following script should be added to ipf-mongo container volumes:

# Initiate replication on the MongoDB node
mongo --eval "rs.initiate()"

until mongo --eval 'rs.status()' | grep -q PRIMARY
do
  echo "Waiting for replication elections to finish......"
  sleep 1s
done

echo "Election is done, populating initial test data..."
sleep 2s

In addition, the command --replSet test --bind_ip_all should be added to the ipf-mongo container setup.
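Put together, an illustrative docker-compose fragment for a local/BDD setup (the service name, image tag and script path are assumptions; adjust them to your environment):

```yaml
ipf-mongo:
  image: mongo:4.4                      # any change-stream-capable version
  command: --replSet test --bind_ip_all
  volumes:
    # mount the replica-set initiation script shown above
    - ./init-replica.sh:/docker-entrypoint-initdb.d/init-replica.sh
```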