HTM Server Deployment Configuration
1. Image Name and Version
To deploy the Human Task Manager (HTM) server, you need to pull the Docker image from the ICON Solutions container registry.
Image name:
registry.ipf.iconsolutions.com/human-task-manager-app
Versioning:
We follow semantic versioning using the MAJOR.MINOR.PATCH format (e.g. 1.3.9). You should always reference a specific version tag to ensure deployment consistency and avoid unintended updates.
image:
  name: registry.ipf.iconsolutions.com/human-task-manager-app
  tag: 1.3.9
Note: Avoid using the latest tag; always pin a specific version to prevent unintended updates.
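For reference, a pinned tag is used the same way directly on the container image in a raw Kubernetes manifest. If your cluster does not already hold credentials for the registry, an imagePullSecrets entry can be added alongside it; the secret name below is illustrative and not something shipped with HTM:

spec:
  # Illustrative pull secret - create it from your own registry credentials
  imagePullSecrets:
    - name: icon-registry-credentials
  containers:
    - name: htm
      image: registry.ipf.iconsolutions.com/human-task-manager-app:1.3.9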
2. Example Kubernetes Manifests
The following example demonstrates a basic deployment of the Human Task Manager (HTM) server using Kubernetes manifests. These can be used as a starting point for writing Helm charts or direct K8s deployments.
This example includes:
- A ConfigMap for application configuration
- A Deployment for the HTM application
- A Service for exposing the HTM app internally
- An Ingress configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: htm-cm
data:
  application.conf: |-
    ipf.mongodb.url = "${ipf.mongodb.url}"
    ipf.htm {
      event-processor.delegating.enabled = ${ipf.htm.event-processor.delegating.enabled}
      mongodb.purging.time-to-live = 15minutes
    }
    management.health.mongo.enabled = false
    event-processor {
      restart-settings {
        min-backoff = 500 millis
        max-backoff = 1 seconds
      }
    }
    akka {
      actor.provider = cluster
      remote.artery.canonical.hostname = ${POD_IP}
      # Use the Kubernetes API to discover the cluster
      discovery {
        kubernetes-api {
          pod-label-selector = "app=%s"
        }
      }
      management {
        # Use the Kubernetes API to create the cluster
        cluster.bootstrap {
          contact-point-discovery {
            discovery-method = kubernetes-api
            service-name = ${AKKA_CLUSTER_BOOTSTRAP_SERVICE_NAME}
            required-contact-point-nr = 1
            required-contact-point-nr = ${?REQUIRED_CONTACT_POINT_NR}
          }
        }
        # Available from Akka Management >= 1.0.0
        health-checks {
          readiness-path = "health/ready"
          liveness-path = "health/alive"
        }
      }
      cluster {
        seed-nodes = []
        downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"
        split-brain-resolver {
          active-strategy = keep-majority
          stable-after = 20s
        }
        sharding {
          remember-entities = off
          handoff-timeout = 8s
          least-shard-allocation-strategy.rebalance-absolute-limit = 20
          rebalance-interval = 2s
          number-of-shards = 100
          distributed-data.majority-min-cap = ${akka.management.cluster.bootstrap.contact-point-discovery.required-contact-point-nr}
          snapshot-after = 50
        }
      }
    }
  logback.xml: |-
    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
      <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <target>System.out</target>
        <encoder>
          <pattern>[%date{ISO8601}] [%level] [%logger] [%marker] [%thread] - %msg%n</pattern>
        </encoder>
      </appender>
      <logger name="akka.stream.scaladsl.RestartWithBackoffSource" level="warn"/>
      <logger name="com.iconsolutions.ipf.core.connector" level="INFO"/>
      <logger name="com.iconsolutions.ipf.core.platform.read.processor" level="INFO"/>
      <logger name="com.iconsolutions.akka.persistence.mongodb" level="INFO"/>
      <logger name="com.iconsolutions.ipf.htm.notification.autoconfig.TaskNotificationAutoConfiguration" level="WARN"/>
      <logger name="com.iconsolutions.ipf.htm.task.task_manager.handlers.commands.processors.builtin.TaskManagerActionRevivalProcessor" level="DEBUG"/>
      <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
        <queueSize>8192</queueSize>
        <neverBlock>true</neverBlock>
        <appender-ref ref="CONSOLE"/>
      </appender>
      <root level="info">
        <appender-ref ref="ASYNC"/>
      </root>
    </configuration>
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: human-task-manager
    product: ipfv2
  name: human-task-manager
spec:
  ports:
    - name: server-port
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: human-task-manager
    product: ipfv2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: human-task-manager
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/"
    prometheus.io/port: "9001"
  labels:
    app: human-task-manager
    product: ipfv2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: human-task-manager
      product: ipfv2
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: "/"
        prometheus.io/port: "9001"
      labels:
        app: human-task-manager
        product: ipfv2
      name: human-task-manager
    spec:
      containers:
        - name: htm
          # Pin a specific version tag, as recommended above
          image: registry.ipf.iconsolutions.com/human-task-manager-app:1.3.9
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 8080
            - name: debug-port
              containerPort: 5005
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /actuator/health/liveness
              port: http
              scheme: HTTP
            initialDelaySeconds: 30
            periodSeconds: 2
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 5
            httpGet:
              path: /actuator/health/readiness
              port: http
              scheme: HTTP
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 2
          env:
            - name: "POD_NAME"
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: "IPF_JAVA_ARGS"
              value: " -XX:+UseContainerSupport -XX:MaxRAMPercentage=60 -XX:InitialRAMPercentage=60
                -XX:-PreferContainerQuotaForCPUCount -Dma.glasnost.orika.writeClassFiles=false
                -Dma.glasnost.orika.writeSourceFiles=false"
            - name: "POD_IP"
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: "KUBERNETES_NAMESPACE"
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: "AKKA_CLUSTER_BOOTSTRAP_SERVICE_NAME"
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['app']
          resources:
            requests:
              memory: 4Gi
              cpu: "4"
            limits:
              memory: 4Gi
          volumeMounts:
            - mountPath: /human-task-manager-app/conf/logback.xml
              name: config-volume
              subPath: logback.xml
            - mountPath: /human-task-manager-app/conf/application.conf
              name: config-volume
              subPath: application.conf
      volumes:
        - name: config-volume
          configMap:
            name: htm-cm
        - name: keystore
          secret:
            secretName: keystore
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: human-task-manager
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: htm.${ipf.ingress-domain}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: human-task-manager
                port:
                  number: 8080
Note: Placeholders such as ${ipf.mongodb.url}, ${ipf.htm.event-processor.delegating.enabled} and ${ipf.ingress-domain} must be replaced with values for your target environment before the manifests are applied.

Note: These are minimal manifests for demonstration. You may need to extend them based on your infrastructure (e.g. secrets, TLS, persistent storage).
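For example, the Deployment above declares a keystore volume backed by the keystore secret but never mounts it. If your environment needs that material inside the container, a mount along the following lines could be added to the HTM container spec; the mount path and key name are illustrative, not values required by HTM:

volumeMounts:
  # Illustrative - assumes the keystore secret contains a key named keystore.p12
  - mountPath: /human-task-manager-app/conf/keystore.p12
    name: keystore
    subPath: keystore.p12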
3. Infrastructure Requirements
The HTM server depends on external infrastructure components that must be available and configured correctly prior to deployment.
MongoDB
HTM uses MongoDB as its primary data store.
- You must provision a MongoDB instance accessible from the cluster.
- The MongoDB connection string should be provided via the configuration file, using the ipf.mongodb.url placeholder.
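Once substituted, the corresponding line in the htm-cm ConfigMap carries an ordinary MongoDB connection string. The host, credentials and database name below are illustrative only:

  application.conf: |-
    # Illustrative connection string - replace host, credentials and database with your own
    ipf.mongodb.url = "mongodb://htm_user:<password>@mongodb.internal.example:27017/htm"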
Kafka
HTM can use Apache Kafka to send notifications when tasks are closed.
- You must have a Kafka cluster accessible from your environment.
- Create Kafka topics for closed-task notifications and task registrations.
- Configure the topic names in the configuration file via the htm.kafka.producer.topic and ipf.htm.async.register-task.kafka.producer.topic properties.
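For instance, the topic properties could be set in the htm-cm ConfigMap as follows; the topic names themselves are illustrative and should match the topics created in your Kafka cluster:

  application.conf: |-
    # Illustrative topic names - use the topics provisioned for your environment
    htm.kafka.producer.topic = "htm-closed-task-notifications"
    ipf.htm.async.register-task.kafka.producer.topic = "htm-task-registrations"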
4. Base Resource Requirements
To ensure stable performance, the HTM server requires the following baseline resources:
- CPU: minimum of 4 cores
- Memory: 4 GB recommended
Note: These values should be considered the lower bound for production environments; the actual values should always be determined by running adequate game-day load tests.
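In the example Deployment above, this baseline corresponds to the following resources block; treat it as a starting point and adjust it based on your load-test results:

resources:
  requests:
    cpu: "4"
    memory: 4Gi
  limits:
    memory: 4Gi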