Version: 2.2.0

Upgrade Operator 2.1.0 to 2.2.0 from OperatorHub.io

This procedure upgrades the Operator installed from OperatorHub.io from version 2.1.0 to version 2.2.0.

Verify that version 2.2.0 of the Operator is available

Verify that the following command lists 2.2.0 as the current CSV version:

kubectl get packagemanifests  -l catalog=operatorhubio-catalog,provider=Aerospike -o yaml | grep currentCSV:

The following appears:

    - currentCSV: aerospike-kubernetes-operator.v2.2.0
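This check can also be scripted. The following is a minimal sketch (the sample `currentCSV` line is inlined here for illustration; in practice, feed it the output of the `kubectl get packagemanifests` command above):

```shell
#!/bin/sh
# Expected release and a sample currentCSV line, as shown above.
expected="2.2.0"
line='- currentCSV: aerospike-kubernetes-operator.v2.2.0'

# Strip everything up to and including the final ".v" to get the version.
current="${line##*.v}"

if [ "$current" = "$expected" ]; then
  echo "Operator $expected is available"
else
  echo "Operator $expected not found (currentCSV is $current)" >&2
  exit 1
fi
```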

Upgrade the operator

Based on the installPlanApproval mode, the upgrade of the Operator is either:

  • Automatic: The default when installing the Operator from OperatorHub.io.
  • Manual: If the OperatorHub.io subscription has been edited to use Manual approval.

Automatic

The standard install procedure sets up Automatic upgrade approval for the Operator. In this case, OLM automatically installs version 2.2.0 of the Operator; no manual steps are needed.

You can skip ahead to verification.

Manual

If your OperatorHub.io subscription is set for Manual approval, follow these steps to manually approve the upgrade.

Verify that the InstallPlan for version 2.2.0 exists:

kubectl get installplan -n operators | grep aerospike

Sample output with an InstallPlan for version 2.2.0:

NAME            CSV                                    APPROVAL   APPROVED
install-2tg7p   aerospike-kubernetes-operator.v2.2.0   Manual     false
install-fn297   aerospike-kubernetes-operator.v2.1.0   Manual     true

In this example, the upgrade has not been applied, because the APPROVED field for the 2.2.0 InstallPlan is false.

To approve the upgrade, set the approved field in the InstallPlan to true using the following:

kubectl patch installplan -n operators --type merge --patch '{"spec":{"approved":true}}'  $(kubectl get installplan -n operators | grep "aerospike-kubernetes-operator.v2.2.0" | cut -f 1 -d " ")
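The `$(...)` substitution in the patch command simply extracts the NAME column of the 2.2.0 InstallPlan. Running the same `grep | cut` pipeline against the sample listing (inlined here for illustration) shows what it resolves to:

```shell
#!/bin/sh
# Sample `kubectl get installplan` body rows (header omitted), as shown above.
listing='install-2tg7p aerospike-kubernetes-operator.v2.2.0 Manual false
install-fn297 aerospike-kubernetes-operator.v2.1.0 Manual true'

# Keep the row for the 2.2.0 CSV, then take the first space-separated
# field, which is the InstallPlan name.
name=$(printf '%s\n' "$listing" \
  | grep 'aerospike-kubernetes-operator\.v2\.2\.0' \
  | cut -f 1 -d ' ')
echo "$name"
```

This works because the InstallPlan name is the first space-separated field of each row; the real command substitutes this pipeline's output as the final argument to `kubectl patch`.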

Verify that the Operator is upgraded

Run the following command:

kubectl get csv -n operators | grep aerospike

The Operator upgrade can take some time. The CSV for version 2.2.0 goes through the phases Pending, InstallReady, and Installing, and ends on Succeeded.

Sample output on success:

NAME                                   DISPLAY                         VERSION   REPLACES                               PHASE
aerospike-kubernetes-operator.v2.2.0   Aerospike Kubernetes Operator   2.2.0     aerospike-kubernetes-operator.v2.1.0   Succeeded
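One way to wait for the rollout is to poll the CSV phase until it reaches Succeeded. In this sketch the phase sequence is stubbed with a local function so the loop is self-contained; against a real cluster you would replace `get_phase` with a `kubectl get csv` query (e.g. using `-o jsonpath='{.status.phase}'`):

```shell
#!/bin/sh
# Stubbed phase sequence matching the lifecycle described above.
# Against a real cluster, replace get_phase with something like:
#   kubectl get csv -n operators aerospike-kubernetes-operator.v2.2.0 \
#     -o jsonpath='{.status.phase}'
i=0
get_phase() {
  set -- Pending InstallReady Installing Succeeded
  shift "$i"
  echo "$1"
}

until [ "$(get_phase)" = "Succeeded" ]; do
  echo "phase: $(get_phase)"
  i=$((i + 1))
  sleep 0   # use a real interval against a cluster, e.g. `sleep 5`
done
echo "upgrade complete"
```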

Check Operator Logs

The Operator runs as two replicas by default, for higher availability. Run the following command to follow the logs for the Operator pods:

kubectl -n operators logs -f deployment/aerospike-operator-controller-manager manager

Sample output:

2022-06-16T19:09:58.058Z    INFO    controller-runtime.metrics  metrics server is starting to listen    {"addr": "127.0.0.1:8080"}
2022-06-16T19:09:58.062Z INFO setup Init aerospike-server config schemas

2022-06-16T19:09:58.071Z DEBUG schema-map Config schema added {"version": "4.7.0"}
2022-06-16T19:09:58.072Z INFO aerospikecluster-resource Registering mutating webhook to the webhook server
2022-06-16T19:09:58.073Z INFO controller-runtime.webhook registering webhook {"path": "/mutate-asdb-aerospike-com-v1beta1-aerospikecluster"}
2022-06-16T19:09:58.073Z INFO controller-runtime.builder skip registering a mutating webhook, admission.Defaulter interface is not implemented {"GVK": "asdb.aerospike.com/v1beta1, Kind=AerospikeCluster"}
2022-06-16T19:09:58.073Z INFO controller-runtime.builder Registering a validating webhook {"GVK": "asdb.aerospike.com/v1beta1, Kind=AerospikeCluster", "path": "/validate-asdb-aerospike-com-v1beta1-aerospikecluster"}
2022-06-16T19:09:58.073Z INFO controller-runtime.webhook registering webhook {"path": "/validate-asdb-aerospike-com-v1beta1-aerospikecluster"}
2022-06-16T19:09:58.074Z INFO setup Starting manager
I1015 19:09:58.074722 1 leaderelection.go:243] attempting to acquire leader lease aerospike/96242fdf.aerospike.com...

Grant RBAC permissions to non-aerospike Kubernetes namespaces

Caution

There is a known issue in OLM-based installations (OperatorHub.io and Red Hat OpenShift) where upgrading from version 2.1.0 to 2.2.0 revokes the RBAC privileges required to run Aerospike clusters in Kubernetes namespaces other than the aerospike namespace.

If you are running Aerospike clusters in Kubernetes namespaces other than the aerospike namespace, re-grant the RBAC privileges following instructions here.