
Deploying Aerospike Clusters in Kubernetes

Kubernetes is one of today's most popular container orchestration platforms. It lets you run your containerized applications across hardware and clouds, and provides comprehensive orchestration tooling for your containers.

Working with Aerospike on Kubernetes is quick and easy with Aerospike's Kubernetes manifests, which can be found at:

These repositories also contain Helm charts and various Aerospike deployment configurations to get you started.

These manifests and Helm charts let you deploy a dynamically scalable Aerospike cluster using Kubernetes StatefulSets.

Usage

  1. Clone the GitHub repository onto a machine with kubectl configured for your Kubernetes cluster:
git clone https://github.com/aerospike/aerospike-kubernetes-enterprise.git
cd aerospike-kubernetes-enterprise
  2. Set your parameters, for example:
export APP_NAME=aerospike
export NAMESPACE=default
export AEROSPIKE_NODES=3
export AEROSPIKE_FEATURE_KEY_FILE=/etc/aerospike/features.conf
...

You can follow the steps below or run the start.sh script available within the repository.

  3. Expand the manifest template:
cat manifests/* | envsubst > expanded.yaml
  4. Create and apply the ConfigMap:
kubectl create configmap aerospike-conf -n $NAMESPACE --from-file configs/

Note: To apply a feature-key-file, add the file to the configs/ directory and create the ConfigMap.
If using mounted volumes to apply the feature-key-file, you can use AEROSPIKE_FEATURE_KEY_FILE to specify the file path within the container.

  5. Deploy:
kubectl create -f expanded.yaml

Detailed instructions are contained in the GitHub repo's README.

Using Helm Charts

Aerospike Helm Charts are available on Helm Hub:

Steps

  1. Add the Aerospike Helm repository:
helm repo add aerospike https://aerospike.github.io/aerospike-kubernetes-enterprise
  2. Install the chart:

    • You can set the configuration values defined here using the --set option or provide your own custom values.yaml file during helm install.

      Note that the namespace-related configurations (aerospikeNamespace, aerospikeNamespaceMemoryGB, aerospikeReplicationFactor and aerospikeDefaultTTL) in the values.yaml file are intended for the default single-namespace configuration. If using multiple namespaces, these config items can be ignored and a separate aerospike.conf file or template can be used.
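      An illustrative values.yaml fragment for the single-namespace case (the values shown are examples, not the chart's defaults):

      ```yaml
      # Example single-namespace settings; adjust to your workload.
      aerospikeNamespace: "test"
      aerospikeNamespaceMemoryGB: 4
      aerospikeReplicationFactor: 2
      aerospikeDefaultTTL: 0
      ```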

    • To apply your own Aerospike configuration, set aerospikeConfFile to point to your custom aerospike.conf file or template. Note that aerospikeConfFile should be a path on the machine where the helm client is running.

    • The Aerospike configuration file can also be passed in base64-encoded form. Use the aerospikeConfFileBase64 configuration to specify the base64-encoded string of the Aerospike configuration file.

      helm install aerospike-release aerospike/aerospike-enterprise \
      --set aerospikeConfFileBase64=$(base64 /tmp/aerospike_templates/aerospike.template.conf) \
      --set-file featureKeyFile=/secrets/aerospike/features.conf
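    One gotcha with the base64-encoded form: many base64 implementations wrap long output across multiple lines, which breaks the --set value. A minimal round-trip sketch assuming GNU coreutils base64, where -w 0 disables line wrapping (the file path and config contents are illustrative):

    ```shell
    # Encode a config file without line wrapping, as would be passed via
    # --set aerospikeConfFileBase64=..., then decode to verify the round trip.
    printf 'service {\n    proto-fd-max 15000\n}\n' > /tmp/aerospike.conf
    ENCODED=$(base64 -w 0 /tmp/aerospike.conf)   # -w 0: single-line output (GNU base64)
    echo "$ENCODED" | base64 -d > /tmp/decoded.conf
    cmp /tmp/aerospike.conf /tmp/decoded.conf
    ```

    On platforms without the -w flag (e.g. older macOS base64), check your implementation's option for disabling line wrapping before passing the output to helm.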
    • To supply a feature-key-file during the deployment (EE only), use featureKeyFile to point to your features.conf license file during helm install. Note that featureKeyFile should be a path on the machine where the helm client is running.

    • The Aerospike license feature-key file can also be passed in base64-encoded form. Use the featureKeyFileBase64 configuration to specify the base64-encoded string of the Aerospike feature-key file.

      helm install aerospike-release aerospike/aerospike-enterprise \
      --set featureKeyFileBase64=$(base64 /secrets/aerospike/features.conf)
    • For storage configuration, you can configure multiple volume mounts (filesystem type), device mounts (raw block devices), or both in values.yaml. See the default values.yaml file for details on configuration.

    For enterprise edition,

    helm install as-release aerospike/aerospike-enterprise \
    --set dbReplicas=5 \
    --set-file featureKeyFile=/secrets/aerospike/features.conf \
    --set-file aerospikeConfFile=/tmp/aerospike_templates/aerospike.template.conf

    For community edition,

    helm install as-release aerospike/aerospike \
    --set dbReplicas=5 \
    --set-file aerospikeConfFile=/tmp/aerospike_templates/aerospike.template.conf
note

For Helm v2, the release name can be specified using the --name option: helm install --name as-release aerospike/aerospike-enterprise ...

note
  • With volumeMode: Filesystem, Aerospike can log the warning below when an info request is received for statistics that include storage device metrics.
    WARNING (hardware): (hardware.c:2296) failed to resolve mounted device /dev/sda: 2 (No such file or directory)
  • Aerospike's hardware module attempts to look up the actual device backing the filesystem in use, to populate metrics for device age or lifetime. With volumeMode: Filesystem, the devices attached to the Kubernetes hosts are not directly accessible at their path (e.g. /dev/sda) inside the container, which causes the Aerospike server to log a warning with Linux error code 2 (No such file or directory). This is not expected to have any performance or operational impact in a Kubernetes environment. This log message is removed as of version 4.9. For earlier versions, to avoid excessive logging of the above WARNING message, the log level for the HARDWARE context can be set to CRITICAL.
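On server versions earlier than 4.9, one way to demote the HARDWARE context is in the logging stanza of aerospike.conf; a minimal sketch (the console sink and the other contexts shown are illustrative, not a complete configuration):

```
logging {
    console {
        context any info
        context hardware critical
    }
}
```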

Storage

Storage in Kubernetes is under constant development. For databases, persistent storage is handled via StatefulSets and PersistentVolumes.

Dynamic provisioning of local devices is not supported yet. However, a local volume provisioner can be deployed to automate the provisioning of local devices.

An example Aerospike cluster deployment using the local volume static provisioner can be found in the examples section.

Local Persistent Volumes

Local Persistent Volumes are useful for utilizing the local SSD devices often found on popular cloud VM instances.

As a side benefit, local SSDs are often much faster than network-based storage (e.g. AWS EBS).

note
  • Dynamic provisioning of local volumes is not supported yet.
  • That means you cannot create local PVs through the StatefulSet's volumeClaimTemplates; you must create each PersistentVolume and PersistentVolumeClaim manually.
note
  • Specifying nodeAffinity is required for local volumes.
note
  • Data stored on local SSDs is ephemeral.
  • A Pod that writes to a local SSD might lose access to the data stored on the disk if the Pod is rescheduled away from that node.
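Since local PVs must be created manually and require nodeAffinity, an illustrative PersistentVolume definition (device path, capacity, storage class, and node name are assumptions for your environment):

```yaml
# Example local PersistentVolume pinned to one node via nodeAffinity.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: aerospike-local-pv-0
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-ssd
  local:
    path: /mnt/disks/ssd0        # local device or its mount point
  nodeAffinity:                  # required for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-node-1
```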

Examples of local Persistent Volume usage with Aerospike can be found here.

To read more about local volumes, along with guides on how to provision them, refer to the Local Volume Static Provisioner git repo. The local volume static provisioner reached beta in Kubernetes 1.12 and GA in 1.14.

Raw Block Volume

Raw block volume support: raw block access is preferred by Aerospike Server for the best performance characteristics (e.g. latency stability, IOPS).

note

Raw Block Volumes can be used in conjunction with local volumes, AWS EBS, GCP PD, and Azure Disk volumes.
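A PersistentVolumeClaim requests raw block access by setting volumeMode: Block; an illustrative sketch (the storage class and size are assumptions for your cluster):

```yaml
# Example PVC requesting a raw block device rather than a filesystem.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: aerospike-data
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block              # expose the device unformatted
  storageClassName: local-ssd
  resources:
    requests:
      storage: 100Gi
```

In the Pod spec, a block-mode volume is attached via volumeDevices (with a devicePath) instead of volumeMounts.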

Examples of Raw Block Volume usage with Aerospike can be found here.