Kubernetes

1. Configuration

For deployment with Kubernetes, a number of configuration files are provided that can be applied to a previously configured Kubernetes cluster.
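
Assuming the configs are saved as local YAML files (the file names below are illustrative, not fixed by the distribution), they are applied in the usual way:

kubectl apply -f meta.yaml
kubectl apply -f cflm.yaml
kubectl apply -f kvds.yaml
kubectl apply -f qe.yaml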

1.1. Pods

In this example, in order to let the cluster scale up properly, we have defined 1 Replication Controller and 3 StatefulSets:

  • Meta (StatefulSet)

  • Conflict Manager (Replication Controller)

  • KVDS (StatefulSet)

  • Query Engine (StatefulSet)

These configs define the corresponding Pods, services, and containers, as well as the templates for the Persistent Volumes they need.
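
Once the configs are applied, a quick way to verify that all components are up is to list the Pods by the app labels used in the configs below:

kubectl get pods -l 'app in (meta, cflm, kvds, qe)'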

1.1.1. Meta StatefulSet

This StatefulSet describes the processes that control the database. Also included is the definition of the Service exposed outside the Kubernetes cluster, including the connection port for the system metrics (30003).

apiVersion: v1
kind: Service
metadata:
  name: meta-service
  labels:
    app: meta
spec:
  type: NodePort
  selector:
    app: meta
  ports:
    # default nodePort range is 30000-32767
  - name: zookeeper
    protocol: TCP
    port: 2181
    targetPort: 2181
    nodePort: 32181
  - name: kvms
    protocol: TCP
    port: 44000
    targetPort: 44000
    nodePort: 30044
  - name: grafana
    protocol: TCP
    port: 3000
    targetPort: 3000
    nodePort: 30003
  - name: prometheus
    protocol: TCP
    port: 9091
    targetPort: 9091
    nodePort: 30091
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: meta
spec:
  serviceName: "meta-service"
  replicas: 1
  selector:
    matchLabels:
      app: meta
  template:
    metadata:
      labels:
        app: meta
    spec:
      terminationGracePeriodSeconds: 10
      securityContext:
        fsGroup: 1000
      containers:
          - name: meta
            image: registry.leanxcale.com/leanxcale:k8s
            imagePullPolicy: Always
            securityContext:
              privileged: true
            lifecycle:
              postStart:
                  exec:
                      command: ["bash", "-c", "sleep 5; /lx/LX-BIN/scripts/startAll.sh ZK; \
                                               sleep 1; /lx/LX-BIN/scripts/startAll.sh LgCmS; \
                                               sleep 1; /lx/LX-BIN/scripts/startAll.sh MtM; \
                                               sleep 1; /lx/LX-BIN/scripts/startAll.sh KVMS; \
                                               sleep 5; /lx/monitor/monitor.sh; sleep 1; /lx/metrics/metricsExporter.sh"]
            livenessProbe:
              # A tcpSocket probe accepts a single port; here we probe ZooKeeper (2181).
              # Other component ports: 13400, 13200, 13300, 44000 (and 3000 for Grafana).
              tcpSocket:
                port: 2181
              initialDelaySeconds: 120
              periodSeconds: 60
            volumeMounts:
            - name: local-pvc-lgcms
              mountPath: "/lx/LX-DATA/tm_logger_cs_1/"

  volumeClaimTemplates:
  - metadata:
      name: local-pvc-lgcms
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
      storageClassName: local-storage
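
After applying this config, you can check which nodePorts were assigned with:

kubectl get svc meta-service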

1.1.2. Conflict Manager RC

The memory used by the Conflict Manager process is defined by the CFLMMEM_FORCE environment variable (set to 2 in this example).

You should set the value to the amount of memory that you need.

apiVersion: v1
kind: ReplicationController
metadata:
    name: cflm
spec:
    replicas: 1
    selector:
        app: cflm
    template:
        metadata:
            name: pod-cflm
            labels:
                app: cflm
        spec:
            containers:
                - name: cflm
                  image: registry.leanxcale.com/leanxcale:k8s
                  imagePullPolicy: Always
                  ports:
                  - containerPort: 11100
                  - containerPort: 9101
                  - containerPort: 9089
                  env:
                  - name: ZK_FORCE
                    value: "$(META_SERVICE_SERVICE_HOST)"
                  - name: KVMS_FORCE
                    value: "$(META_SERVICE_SERVICE_HOST)"
                  - name: CFLMMEM_FORCE
                    value: "2"
                  lifecycle:
                    postStart:
                        exec:
                            command: ["bash", "-c", "sleep 5; /lx/LX-BIN/scripts/startAll.sh CflM; \
                                                     sleep 1; /lx/monitor/onlynode.sh; sleep 1; /lx/metrics/metricsExporter.sh"]
                  livenessProbe:
                    tcpSocket:
                      port: 11100
                    initialDelaySeconds: 120
                    periodSeconds: 60
            restartPolicy: Always
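
Note that ZK_FORCE and KVMS_FORCE rely on META_SERVICE_SERVICE_HOST, a variable Kubernetes injects automatically for the meta-service Service; it is only injected into Pods created after the Service exists, so meta-service must be created before the Conflict Manager Pods. To inspect the injected value from a running Pod (the pod name is a placeholder):

kubectl exec <cflm_pod_name> -- printenv META_SERVICE_SERVICE_HOST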

1.1.3. KVDS StatefulSet

The amount of memory used by the KVDS process is defined by the KVDSMEM_FORCE environment variable (set to 3 in this example).

You should set the value to the amount of memory that you need.

Here you can also see that we have configured a volume to be attached to the container, together with the corresponding Persistent Volume Claim template.

In this case, the mount point used is /lx/LX-DATA/kivi_ds_data_dir_1.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kvds
spec:
  serviceName: "kvds-service"
  replicas: 3
  selector:
    matchLabels:
      app: kvds
  template:
    metadata:
      labels:
        app: kvds
    spec:
      terminationGracePeriodSeconds: 10
      securityContext:
        fsGroup: 1000
      containers:
      - name: kvds
        image: registry.leanxcale.com/leanxcale:k8s
        imagePullPolicy: Always
        ports:
        - containerPort: 9992
        - containerPort: 9101
        - containerPort: 9089
        env:
        - name: ZK_FORCE
          value: "$(META_SERVICE_SERVICE_HOST)"
        - name: KVMS_FORCE
          value: "$(META_SERVICE_SERVICE_HOST)"
        - name: KVDSMEM_FORCE
          value: "3"
        lifecycle:
          postStart:
              exec:
                  command: ["bash", "-c", "sleep 5; /lx/LX-BIN/scripts/startAll.sh KVDS-1; \
                                           sleep 1; /lx/monitor/onlynode.sh; sleep 1; /lx/metrics/metricsExporter.sh"]
        livenessProbe:
          tcpSocket:
            port: 9992
          initialDelaySeconds: 120
          periodSeconds: 60
        volumeMounts:
        - name: local-pvc-ds
          mountPath: "/lx/LX-DATA/kivi_ds_data_dir_1"
  volumeClaimTemplates:
  - metadata:
      name: local-pvc-ds
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
      storageClassName: local-storage
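
Because this is a StatefulSet, the Pods get stable, ordinal names (kvds-0, kvds-1, kvds-2), each bound to its own Persistent Volume Claim. This makes it easy, for example, to open a shell on a specific data server:

kubectl exec -it kvds-0 -- bash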

1.1.4. Query Engine StatefulSet

The memory used by the Query Engine process is defined by the QEMEM_FORCE environment variable (set to 1 in this example).

You should set the value to the amount of memory that you need.

Also included is the definition of the Service that exposes the query connection port outside the Kubernetes cluster, on nodePort 31529.

apiVersion: v1
kind: Service
metadata:
  name: qe-service
  labels:
    app: qe
spec:
  type: NodePort
  selector:
    app: qe
  ports:
    # default nodePort range is 30000-32767
  - name: query
    protocol: TCP
    port: 1529
    targetPort: 1529
    nodePort: 31529
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: qe
spec:
  serviceName: "qe-service"
  replicas: 2
  selector:
    matchLabels:
      app: qe
  template:
    metadata:
      labels:
        app: qe
    spec:
      terminationGracePeriodSeconds: 10
      securityContext:
        fsGroup: 1000
      containers:
          - name: qe
            image: registry.leanxcale.com/leanxcale:k8s
            imagePullPolicy: Always
            ports:
            - containerPort: 1529
            - containerPort: 9101
            - containerPort: 9089
            env:
            - name: ZK_FORCE
              value: "$(META_SERVICE_SERVICE_HOST)"
            - name: KVMS_FORCE
              value: "$(META_SERVICE_SERVICE_HOST)"
            - name: QEMEM_FORCE
              value: "1"
            lifecycle:
              postStart:
                  exec:
                      command: ["bash", "-c", "sleep 5; /lx/LX-BIN/scripts/startAll.sh LgLTM; \
                                               sleep 5; /lx/LX-BIN/scripts/startAll.sh QE; \
                                               sleep 1; /lx/monitor/onlynode.sh; sleep 1; /lx/metrics/metricsExporter.sh"]
            livenessProbe:
              # A tcpSocket probe accepts a single port; here we probe the query port (1529).
              # Another component port: 13422.
              tcpSocket:
                port: 1529
              initialDelaySeconds: 120
              periodSeconds: 60
            volumeMounts:
            - name: local-pvc-lgltm
              mountPath: "/lx/LX-DATA/tm_logger_cs_1/"

  volumeClaimTemplates:
  - metadata:
      name: local-pvc-lgltm
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
      storageClassName: local-storage
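
A client outside the cluster connects through any node's address on nodePort 31529. A quick reachability check from the client machine (the node address is a placeholder):

nc -z <node_ip> 31529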

1.2. Services

We define two services that enable communication between the processes inside the cluster and expose the main ports outside the cluster:

  • SVC META

  • SVC QE

You may need to modify the exposed port defined at SVC QE (31529 in our example) to meet the configuration of your Kubernetes deployment.
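
For example, the nodePort can be changed in place without editing the config file, using a JSON patch (31530 is just an illustrative value):

kubectl patch svc qe-service --type=json -p '[{"op": "replace", "path": "/spec/ports/0/nodePort", "value": 31530}]'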

1.3. Scale Up

When scaling up, keep in mind that you only need a single Meta instance.

But you can deploy as many KVDS, Query Engine, and Conflict Manager instances as needed to fulfil your requirements.

To scale up, you just need to update the replica count of the RC or the StatefulSet:

kubectl scale rc cflm --replicas=8
kubectl scale statefulset kvds --replicas=16
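
The Query Engine StatefulSet scales the same way (the replica count here is just an example):

kubectl scale statefulset qe --replicas=4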

1.4. Persistent Volumes

There are Persistent Volume Claim templates pre-defined inside the StatefulSet definitions.

However, the Kubernetes cluster administrator needs to define the matching Persistent Volumes based on the topology of the cluster.

Hence you need to configure Persistent Volumes (PVs) of the local type, so that the disks are local to the node running the container.

1.4.1. Persistent Volume

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-ds-k8snode-1
spec:
  capacity:
    storage: 50Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: <path_at_kubernetes_cluster_node>
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <kubernetes_cluster_node_name>
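
Each StatefulSet replica claims its own volume, so one PV like this is needed per replica, each with a unique name, path, and node name. Once created, the claim bindings can be checked with:

kubectl get pv,pvc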

2. LeanXcale Monitoring

LeanXcale comes with system monitoring based on Prometheus and Grafana.

The monitoring is bundled in the Kubernetes configs, with the Grafana dashboard exposed on port 30003.
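
Both monitoring UIs go through the meta-service NodePorts defined in the Meta config, so they should be reachable on any cluster node (the node address below is a placeholder):

http://<node_ip>:30003    # Grafana
http://<node_ip>:30091    # Prometheus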

3. Docker

For general information about running LeanXcale on Docker, see Getting Started with LeanXcale in Docker.