
RUN 1 - Running on Kubernetes

Prerequisites

To run this tutorial you need a local Kubernetes cluster. Install and start your preferred solution for running one.

Kubernetes (K8s) is an open-source system for automating the deployment, scaling, and management of containerized applications.
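As a concrete sketch (assuming minikube, which is only one of several valid options), starting a local cluster might look like:

```shell
# Assumed example: minikube as the local cluster; any equivalent tool works.
if command -v minikube >/dev/null 2>&1; then
  minikube start
  kubectl cluster-info   # confirm kubectl is pointed at the new cluster
else
  echo "minikube not installed - use your preferred local cluster instead"
fi
```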

Deploying the IPF Tutorial on Kubernetes

Step 1 - Create a Namespace

kubectl create namespace ipf-tutorial
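To confirm the namespace exists before continuing (assuming kubectl is already configured against your local cluster):

```shell
# Optional check: the namespace should now be listed.
kubectl get namespace ipf-tutorial || echo "namespace missing - was the create step run?"
```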

Step 2 - Create a ServiceAccount

Create a file 'serviceAccount.yaml' and copy the following service account manifest, which grants the permissions the tutorial requires.

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ipf-tutorial
  namespace: ipf-tutorial
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ipf-tutorial
  namespace: ipf-tutorial
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ipf-tutorial
  namespace: ipf-tutorial
subjects:
  - kind: ServiceAccount
    name: ipf-tutorial
    namespace: ipf-tutorial
roleRef:
  kind: Role
  name: ipf-tutorial
  apiGroup: rbac.authorization.k8s.io

The 'serviceAccount.yaml' file creates an 'ipf-tutorial' Role, an 'ipf-tutorial' ServiceAccount, and binds the ServiceAccount to the Role.

The 'ipf-tutorial' account has permission to query the K8s API about IPF pods within the namespace.

Now create the account with kubectl.

kubectl apply -f serviceAccount.yaml
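As an optional sanity check, `kubectl auth can-i` can impersonate the new ServiceAccount and confirm the Role grants it pod access (it should answer "yes"):

```shell
# Ask the API server whether the ServiceAccount may list pods in the namespace.
kubectl auth can-i list pods \
  --as=system:serviceaccount:ipf-tutorial:ipf-tutorial \
  -n ipf-tutorial || echo "check failed - is the cluster running?"
```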

Step 3 - Create a ConfigMap

Create a file 'configmap.yaml' and copy the following configuration manifest.

apiVersion: v1
kind: ConfigMap
metadata:
  name: ipf-tutorial-service-cm
  namespace: ipf-tutorial
data:
  application.conf: |
    akka {
      loglevel = "INFO"
      cluster {
        seed-nodes = []
        sharding {
          distributed-data.majority-min-cap = 2
        }
      }
      discovery {
        kubernetes-api {
          pod-label-selector = "app=%s"
        }
      }
      actor.provider = cluster
    }
    akka.remote.artery.canonical.hostname = ${POD_IP}
    akka.management {
      health-checks {
        readiness-path  = "health/ready"
        liveness-path   = "health/alive"
      }
      cluster.bootstrap {
        contact-point-discovery {
          service-name              = "ipf-tutorial-service"
          discovery-method          = kubernetes-api
          required-contact-point-nr = 2
        }
      }
    }
    management.endpoints.web.exposure.include = "*"
    flow-restart-settings {
      min-backoff = 1s
      max-backoff = 5s
      random-factor = 0.25
      max-restarts = 5000
      max-restarts-within = 3h
    }
    ipf {
      mongodb.url = "mongodb://ipf-mongo:27017/ipf"
      processing-data.egress {
        enabled = true
        transport = http
        http {
          client {
            host = "ipf-developer-service"
            port = 8081
            endpoint-url = "/ipf-processing-data"
          }
        }
      }
    }
  logback.xml: |
    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
      <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <target>System.out</target>
        <encoder>
          <pattern>[%date{ISO8601}] [%level] [%logger] [%marker] [%thread] - %msg MDC: {%mdc}%n</pattern>
        </encoder>
      </appender>
      <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
        <queueSize>8192</queueSize>
        <neverBlock>true</neverBlock>
        <appender-ref ref="CONSOLE" />
      </appender>
      <logger name="akka" level="WARN" />
      <root level="INFO">
        <appender-ref ref="ASYNC"/>
      </root>
    </configuration>

Now apply the ConfigMap:

kubectl apply -f configmap.yaml
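To confirm both configuration files were stored, the ConfigMap's data can be inspected (an optional check):

```shell
# The ConfigMap should list application.conf and logback.xml under Data.
kubectl describe configmap ipf-tutorial-service-cm -n ipf-tutorial || echo "ConfigMap not found"
```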

Step 4 - Create an imagePullSecret

Replace docker-server, docker-username, and docker-password with your own values.

kubectl create secret docker-registry registrysecret --docker-server=**** --docker-username=********* --docker-password=******* --namespace ipf-tutorial
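An optional check that the pull secret was created with the expected type:

```shell
# The secret should be of type kubernetes.io/dockerconfigjson.
kubectl get secret registrysecret -n ipf-tutorial || echo "secret not found"
```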

Step 5 - Create MongoDB

Create a file 'infrastructure.yaml' and copy the following manifest. It creates the MongoDB instance required by the tutorial application.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
  namespace: ipf-tutorial
spec:
  selector:
    matchLabels:
      role: mongo
  serviceName: "ipf-mongo"
  replicas: 1
  template:
    metadata:
      labels:
        role: mongo
    spec:
      imagePullSecrets:
        - name: "registrysecret"
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 27017
        - name: mongo-exporter
          image: bitnami/mongodb-exporter:0.11.2
          imagePullPolicy: IfNotPresent
          ports:
            - name: mongo-exporter
              containerPort: 9216
              protocol: TCP
          env:
            - name: MONGODB_URI
              value: "mongodb://localhost:27017"
---
apiVersion: v1
kind: Service
metadata:
  name: ipf-mongo
  namespace: ipf-tutorial
  labels:
    name: ipf-mongo
    type: mongo
spec:
  ports:
    - port: 27017
      name: mongo
      protocol: TCP
      targetPort: 27017
  selector:
    role: mongo

Now apply the manifest:

kubectl apply -f infrastructure.yaml
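Since the applications depend on MongoDB, it may help to wait for the StatefulSet to finish rolling out before continuing (an optional step):

```shell
# Block until the mongo pod is running, or time out after two minutes.
kubectl rollout status statefulset/mongo -n ipf-tutorial --timeout=120s || echo "mongo not ready yet"
```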

Step 6 - Create the Developer App

Create a file developerApp.yaml and copy the following manifest. It creates the developer app needed to view flow events.

${registry_service}: replace with the location of your docker registry

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ipf-developer-service
  namespace: ipf-tutorial
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/"
    prometheus.io/port: "9001"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ipf-developer-service
      product: ipfv2
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: "/"
        prometheus.io/port: "9001"
      labels:
        app: ipf-developer-service
        product: ipfv2
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - ipf-developer-service
              topologyKey: kubernetes.io/hostname
      securityContext:
        fsGroup: 1000
        runAsUser: 1000
      serviceAccountName: ipf-tutorial
      imagePullSecrets:
        - name: "registrysecret"
      containers:
        - name: ipf-developer-service
          image: ${registry_service}/ipf-developer-app:latest
          imagePullPolicy: Always
          ports:
            - name: actuator
              containerPort: 8081
          env:
            - name: "POD_NAME"
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: "POD_IP"
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: "KUBERNETES_NAMESPACE"
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: "IPF_JAVA_ARGS"
              value: "-Dma.glasnost.orika.writeClassFiles=false -Dma.glasnost.orika.writeSourceFiles=false"
          resources:
            limits:
              memory: "2Gi"
            requests:
              memory: "2Gi"
              cpu: "1000m"
          volumeMounts:
            - mountPath: /ipf-developer-app/conf/logback.xml
              name: config-volume
              subPath: logback.xml
            - mountPath: /ipf-developer-app/conf/application.conf
              name: config-volume
              subPath: application.conf
      volumes:
        - name: config-volume
          configMap:
            name: ipf-developer-service-cm
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ipf-developer-service-cm
  namespace: ipf-tutorial
data:
  application.conf: |
    flow-restart-settings {
     min-backoff = 1s
     max-backoff = 5s
     random-factor = 0.25
     max-restarts = 5
     max-restarts-within = 10m
    }
    spring.data.mongodb.uri = ${?ipf.mongodb.url}
    actor-system-name = ipf-developer
    ipf.mongodb.url = "mongodb://ipf-mongo:27017/ipf"
    ods.security.oauth.enabled = false
    application.write.url="http://localhost:8080"
    ipf.processing-data.ingress.transport=http
  logback.xml: |
    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
      <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <target>System.out</target>
        <encoder>
          <pattern>[%date{ISO8601}] [%level] [%logger] [%marker] [%thread] - %msg MDC: {%mdc}%n</pattern>
        </encoder>
      </appender>
      <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
        <queueSize>8192</queueSize>
        <neverBlock>true</neverBlock>
        <appender-ref ref="CONSOLE" />
      </appender>
      <logger name="akka" level="WARN" />
      <root level="INFO">
        <appender-ref ref="ASYNC"/>
      </root>
    </configuration>
---
apiVersion: v1
kind: Service
metadata:
  name: ipf-developer-service
  namespace: ipf-tutorial
  labels:
    name: ipf-developer-service
spec:
  type: NodePort
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
      nodePort: 30200
      name: ipf-developer-service
  selector:
    app: ipf-developer-service
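The developer app manifest still needs to be applied; although that step is not shown explicitly above, it follows the same pattern as the previous files:

```shell
# Apply the developer app manifest and wait for its single replica.
kubectl apply -f developerApp.yaml || echo "apply failed - check the manifest path"
kubectl rollout status deployment/ipf-developer-service -n ipf-tutorial --timeout=180s || echo "deployment not ready yet"
```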

Step 7 - Create a Deployment

Create a file 'deployment.yaml' and copy the following manifest.

${registry_service}: replace with the location of your docker registry
${tutorial-service-version}: replace with the version of the tutorial app

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ipf-tutorial-service
  namespace: ipf-tutorial
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/"
    prometheus.io/port: "9001"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ipf-tutorial-service
      product: ipfv2
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: "/"
        prometheus.io/port: "9001"
      labels:
        app: ipf-tutorial-service
        product: ipfv2
    spec:
      #      affinity:
      #        podAntiAffinity:
      #          requiredDuringSchedulingIgnoredDuringExecution:
      #            - labelSelector:
      #                matchExpressions:
      #                  - key: app
      #                    operator: In
      #                    values:
      #                      - ipf-tutorial-service
      #              topologyKey: kubernetes.io/hostname
      securityContext:
        fsGroup: 1000
        runAsUser: 1000
      serviceAccountName: ipf-tutorial
      imagePullSecrets:
        - name: "registrysecret"
      containers:
        - name: ipf-tutorial-service
          image: ${registry_service}/ipf-tutorial-app:${tutorial-service-version}
          imagePullPolicy: Always
          ports:
            - name: actuator
              containerPort: 8080
            - name: akka-artery
              containerPort: 55001
            - name: akka-management
              containerPort: 8558
            - name: akka-metrics
              containerPort: 9001
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /health/alive
              port: akka-management
              scheme: HTTP
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /health/ready
              port: akka-management
              scheme: HTTP
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 10
            timeoutSeconds: 1
          env:
            - name: "POD_NAME"
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: "POD_IP"
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: "KUBERNETES_NAMESPACE"
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: "IPF_JAVA_ARGS"
              value: "-Dma.glasnost.orika.writeClassFiles=false -Dma.glasnost.orika.writeSourceFiles=false"
          resources:
            limits:
              memory: "2Gi"
            requests:
              memory: "2Gi"
              cpu: "1000m"
          volumeMounts:
            - mountPath: /ipf-tutorial-app/conf/logback.xml
              name: config-volume
              subPath: logback.xml
            - mountPath: /ipf-tutorial-app/conf/application.conf
              name: config-volume
              subPath: application.conf
      volumes:
        - name: config-volume
          configMap:
            name: ipf-tutorial-service-cm
---
apiVersion: v1
kind: Service
metadata:
  name: ipf-tutorial-service
  namespace: ipf-tutorial
  labels:
    name: ipf-tutorial-service
spec:
  type: NodePort
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 30100
      name: ipf-tutorial-service
  selector:
    app: ipf-tutorial-service

The affinity rules (commented out in the manifest) specify where, or on which nodes, the pods should be scheduled.

In this IPF deployment on Kubernetes we have used:

  1. A securityContext for the IPF pod

  2. Liveness and readiness probes to monitor the health of the IPF pod

  3. Application and logging configuration files stored as a ConfigMap

Create the deployment with kubectl.

kubectl apply -f deployment.yaml

Check the status of the deployment:

kubectl get deployments -n ipf-tutorial

Get the details of the deployment:

kubectl describe deployments --namespace=ipf-tutorial
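To watch the three replicas come up, the pods can be listed by label; and, assuming the Akka Management cluster HTTP routes are on the application's classpath (an assumption, not confirmed by the tutorial), cluster membership can be inspected over port 8558:

```shell
# List the tutorial pods; all three should eventually report Running and 1/1 Ready.
kubectl get pods -n ipf-tutorial -l app=ipf-tutorial-service || echo "no pods found"

# Hypothetical membership check via a port-forward (run in a second terminal):
# kubectl port-forward deployment/ipf-tutorial-service 8558:8558 -n ipf-tutorial
# curl http://localhost:8558/cluster/members
```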

Submit a payment with the following command (substitute your cluster node's IP address):

curl -X POST http://<cluster-node-ip-address>:30100/submit | jq

View the payment in the developer app at:

http://<cluster-node-ip-address>:30200/explorer.html

Consult your cluster option's documentation for how to obtain the node IP. Some options: kubectl get nodes -o wide, minikube ip.
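As one sketch (assuming a single-node cluster whose node advertises an InternalIP address), the node IP can be read with a jsonpath query and substituted into the URLs above:

```shell
# Read the first node's InternalIP; empty if the cluster is not reachable.
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}' 2>/dev/null)
echo "Submit URL:   http://${NODE_IP}:30100/submit"
echo "Explorer URL: http://${NODE_IP}:30200/explorer.html"
```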