
Configuration Reference

This page lists the configuration parameters that can be specified in the Aerospike configuration file /etc/aerospike/aerospike.conf.

Manipulating Runtime Configuration

info

For Access-Control-enabled clusters, authentication is required to retrieve server configuration details, and the sys-admin permission is required to dynamically change configuration parameters.
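For example, on a security-enabled cluster the same info calls can be issued with the -U/-P authentication options common to the Aerospike tools; the admin user shown here is only an illustration:

asinfo -U admin -P -v 'get-config:context=service'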

Viewing configuration settings

To view the configuration values from a running system:

asinfo -v 'get-config:'

To view the configuration value for a specific context:

asinfo -v 'get-config:context=someContextName'

To view namespace-specific configuration:

asinfo -v 'get-config:context=namespace;id=someNameSpaceName'
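As a concrete illustration, assuming a namespace named test (a hypothetical name), the following returns that namespace's settings as a single semicolon-delimited list of name=value pairs:

asinfo -v 'get-config:context=namespace;id=test'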

Dynamic configuration

note

Tools package 6.0.x or later is required to use asadm's manage config commands.

To change dynamic configuration values on a running system, use asadm or asinfo:

asadm -e 'enable; manage config someContextName someOptionalID param someParameterName to valueForThatParameter'

asadm -e 'enable; manage config someContextName nameOfaSubcontext param someParameterName to valueForThatParameter'

or

asinfo -v 'set-config:context=someContextName;id=someOptionalID;someParameterName=valueForThatParameter'

asinfo -v 'set-config:context=someContextName;nameOfaSubcontext.someParameterName=someValueForThatParameter'

where:

  • set-config: Command used to change any dynamically configurable parameter.
  • context: The component being updated. Allowed values:
    • logging
    • namespace
    • security
    • service
    • network
    • xdr
  • nameOfaSubcontext: The sub-component being updated. Allowed values:
    • heartbeat
    • fabric
    • file
    • set
    • For XDR: datacenter and namespace.
    • The nameOfaSubcontext field is not required for subcontext storage-engine.
  • id: This is required only if updating namespace-specific configuration values. Note: this id is not used with XDR namespace syntax. See "XDR syntax" below.
  • someParameterName: This is the configuration name that is being updated.

To set a parameter in the set subcontext:

asadm -e 'enable; manage config namespace someNameSpaceName set someSetName param someParameterName to valueForThatParameter'

or

asinfo -v 'set-config:context=namespace;id=someNameSpaceName;set=someSetName;someParameterName=someValueForThatParameter'

Other examples for service, namespace, and network contexts:

asadm -e 'enable; manage config service param proto-fd-max to 100000'

asadm -e 'enable; manage config namespace test param defrag-sleep to 500'

asadm -e 'enable; manage config network heartbeat param protocol to v3'

XDR syntax

To view highest-level XDR-specific configuration values:

asinfo -v 'get-config:context=xdr'

To create a new datacenter:

asadm -e 'enable; manage config xdr create dc someDataCenterName'

or

asinfo -v 'set-config:context=xdr;dc=someDataCenterName;action=create'

To delete a datacenter:

asadm -e 'enable; manage config xdr delete dc someDataCenterName'

or

asinfo -v 'set-config:context=xdr;dc=someDataCenterName;action=delete'

To add a new XDR namespace:

asadm -e 'enable; manage config xdr dc someDataCenterName add namespace someNameSpaceName'

or

asinfo -v 'set-config:context=xdr;dc=someDataCenterName;namespace=someNameSpaceName;action=add'

To remove an XDR namespace:

asadm -e 'enable; manage config xdr dc someDataCenterName remove namespace someNameSpaceName'

or

asinfo -v 'set-config:context=xdr;dc=someDataCenterName;namespace=someNameSpaceName;action=remove'

To set a specific parameter for an XDR namespace:

asadm -e 'enable; manage config xdr dc someDataCenterName namespace someNameSpaceName param SomeParameterName to someValueForThatParameter'

or

asinfo -v 'set-config:context=xdr;dc=someDataCenterName;namespace=someNameSpaceName;SomeParameterName=someValueForThatParameter'

To view the configuration values for a specific XDR datacenter:

asadm -e 'show config xdr for someDataCenterName'

or

asinfo -v 'get-config:context=xdr;dc=someDataCenterName'

To view the configuration values for a specific XDR namespace:

asinfo -v 'get-config:context=xdr;dc=someDataCenterName;namespace=someNameSpaceName'
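Putting the above together, a minimal workflow for wiring up a new datacenter might look like the following sketch. The names DC1 and test are hypothetical, max-throughput is assumed here purely as an illustrative XDR namespace parameter, and seeding the DC with node addresses is not shown:

asadm -e 'enable; manage config xdr create dc DC1'
asadm -e 'enable; manage config xdr dc DC1 add namespace test'
asadm -e 'enable; manage config xdr dc DC1 namespace test param max-throughput to 50000'
asadm -e 'show config xdr for DC1'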

Configuration Parameter Classification

[enterprise]
Configuration parameters that are only valid on Aerospike Enterprise.

[dynamic]
Configuration parameters that may be changed at runtime.

[static]
Configuration parameters that can only be set when starting the node.

[required]
Configuration parameters required for Aerospike to start.

[unanimous]
Configuration parameters that must be the same across the cluster.


cluster

mode

[unanimous] [static]
Context:

cluster

Removed:

3.13.0.1 (post cluster protocol change)

Specifies whether node IDs are statically configured or dynamically derived from the local IP address. This is removed in 3.13.0.1 post clustering protocol switch, as the cluster context is removed. Refer to node-id for the ability to specify a node's ID in version 3.16.0.1 and higher.

Additional information

Options:

  • static: node-id must be statically assigned.

  • dynamic: node-id is dynamically chosen based on the local IP address.

  • none: Do not use rack awareness.

note

The cluster context requires paxos-protocol v4.

self-group-id

[static]
Context:

cluster

Removed:

3.13.0.1 (post cluster protocol change)

Removed in 3.13.0.1 post clustering protocol switch. Replaced with rack-id at the namespace level. Identifies a collection of nodes. Nodes with the same group-id will not share replicas.

Additional information

The group-id may be configured as any 16-bit unsigned integer.

self-node-id

[static]
Context:

cluster

Removed:

3.13.0.1 (post cluster protocol change)

Identifies an individual node, must be unique within a group-id. This is removed in 3.13.0.1 post clustering protocol switch, as the cluster context is removed. Refer to node-id for the ability to specify a node's ID in version 3.16.0.1 and higher. Alternatively, refer to node-id-interface to specify the interface to be used for the node id generation.

Additional information

If mode is configured to dynamic the node-id will be based on the IP address of the local node.

The node-id may be configured as any 32-bit unsigned integer.

caution

The configuration file options node-id and node-id-interface are mutually exclusive.

logging

context

[dynamic]
Context:

logging

Subcontext:

file

Default:

any critical

Specifies the context and level of logging to be logged. You can use a combination of contexts and logging levels. For details on changing log level, see Changing Log Levels.

Additional information

Different contexts and their logging levels can be obtained with the following command:

asinfo -v log/0

Sample output:

misc:CRITICAL;alloc:CRITICAL;arenax:CRITICAL;hardware:CRITICAL;jem:CRITICAL;msg:CRITICAL;...

Prior to Aerospike Server version 4.9, the default severity level was INFO:

misc:INFO;alloc:INFO;arenax:INFO;hardware:INFO;jem:INFO;msg:INFO;...

Supports the following logging levels:

  • context any info
  • context any debug
  • context any warning
  • context any critical
  • context any detail
note

For common log messages details and full list of contexts, see Server Log Messages Reference Manual.
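For example, assuming the log-set info command described in Changing Log Levels, the misc context of log sink 0 could be raised to debug at runtime with something like:

asinfo -v 'log-set:id=0;misc=debug'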

file

[static]
Context:

logging

Default:

/var/log/aerospike/aerospike.log

Specifies the path of the server log file. Not to be confused with file from the namespace context. You can have multiple files for various contexts.

Additional information

Example:

logging {
    file /var/log/aerospike/aerospike.log {
        context any info
    }

    file /var/log/aerospike/aerospike_debug.log {
        context any debug
    }
}

Context specifies the context and level of logging to be logged. You can use a combination of contexts and logging levels. This configuration can be used with either file or console logging. Different contexts and their logging levels can be obtained with the following command:

asinfo -v log/0

Sample output:

misc:INFO;alloc:INFO;arenax:INFO;hardware:INFO;jem:INFO;msg:INFO;...

Supports the following logging levels:

  • context any info
  • context any debug
  • context any warning
  • context any critical
  • context any detail
note

For common log messages details and full list of contexts, refer to the Server Log Messages Reference Manual.
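As a sketch of the console logging mentioned above (the console subcontext name is assumed from standard Aerospike configuration), log output can be directed to stdout instead of, or in addition to, a file:

logging {
    console {
        context any info
    }
}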

mod-lua

cache-enabled

[static]
Context:

mod-lua

Default:

true

Whether to enable caching of Lua states for each registered Lua module, to benefit performance.

Additional information
note

With the cache enabled, 10 Lua states are initially cached for each Lua module on every node, and the cache expands as needed at runtime up to a maximum of 128 entries per module.

system-path

[static]
Context:

mod-lua

Default:

/opt/aerospike/sys/udf/lua

Removed:

4.3.1

Directory to be used by the Aerospike process to store default UDF files. Removed as of version 4.3.1 where the process is simplified by storing this code in C strings in the mod-lua module and loading from them directly. This eliminates any client/server dependency on the lua-core module. The directories under /udf/lua/external are no longer part of an installation as of version 4.3.1. After upgrading, such lingering directories can be removed for clarity.

Additional information
note

If this directory is user specified, the Aerospike process must have read/write permission on that directory.

user-path

[static]
Context:

mod-lua

Default:

/opt/aerospike/usr/udf/lua

Directory to be used by the Aerospike process to store user generated UDF files.

Additional information
note

If this directory is user specified, the Aerospike process must have read/write permission on that directory.
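A minimal mod-lua stanza using the parameters described above might look like this; the path shown is the default and is only illustrative:

mod-lua {
    cache-enabled true
    user-path /opt/aerospike/usr/udf/lua
}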

namespace

allow-nonxdr-writes

[dynamic]
Context:

namespace

Default:

true

Introduced:

3.5.12

Removed:

5.0.0

In Aerospike 5.0, this parameter was replaced by reject-non-xdr-writes. Parameter to control the writes done by a non-XDR client. Setting it to false will disallow all writes from a non-XDR client (any regular client library). This parameter is useful to prevent accidental writes by a non-XDR client to a namespace where they are not expected, and can be used for namespaces taking writes exclusively from XDR client(s). When set to false, error code 10 will be returned and the fail_xdr_forbidden statistic will be incremented.

Additional information

Example: Set allow-nonxdr-writes to false:

asinfo -v "set-config:context=namespace;id=namespaceName;allow-nonxdr-writes=false"
ok
note

For versions prior to 3.8 (XDR as a separate process from asd), to dynamically change this parameter you must target asd's service port, not XDR's.

allow-ttl-without-nsup

[dynamic]
Context:

namespace

Default:

false

Introduced:

4.9

Aerospike strongly recommends that you do not change this setting. See the Warning in "Additional Information" below.

If data expiration and eviction are disabled (nsup-period set to 0, the default), setting allow-ttl-without-nsup to true allows writes of records with a non-zero TTL (which would otherwise not be allowed).

Additional information

Example: Set allow-ttl-without-nsup to true:

asinfo -v "set-config:context=namespace;id=namespaceName;allow-ttl-without-nsup=true"
ok
note

For additional discussion, see Namespace Data Retention Configuration.

caution

Aerospike strongly recommends that you not change this setting.

The server will not start if nsup-period is 0 (the default) but default-ttl is non-zero, unless this setting is set to true.
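As an illustrative sketch only (given the strong recommendation above not to change this setting), a namespace that disables nsup yet still accepts records with TTLs might be configured as follows; replication-factor, memory-size and storage-engine memory are assumed standard namespace parameters not covered in this entry:

namespace test {
    replication-factor 2
    memory-size 4G
    nsup-period 0                # expiration/eviction disabled (default)
    allow-ttl-without-nsup true  # permit non-zero TTL writes anyway
    default-ttl 30D
    storage-engine memory
}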

allow-xdr-writes

[dynamic]
Context:

namespace

Default:

true

Introduced:

3.5.12

Removed:

5.0.0

In Aerospike 5.0, this parameter was replaced by reject-xdr-writes. Parameter to control whether to accept write transactions originating from an XDR client. Setting it to false will disallow all writes from an XDR client (at a destination cluster) and will only allow non-XDR clients to write. This parameter is useful to prevent accidental writes by an XDR client. When set to false, error code 10 will be returned, disallowed writes will not be relogged by XDR, and the fail_xdr_forbidden statistic will be incremented on the remote (destination) cluster.

Additional information

Example: Set allow-xdr-writes to true:

asinfo -v "set-config:context=namespace;id=namespaceName;allow-xdr-writes=true"
ok
note

For versions prior to 3.8 (XDR as a separate process from asd), to dynamically change this parameter you must target asd's service port, not XDR's.

background-query-max-rps

[dynamic]
Context:

namespace

Default:

10000

Introduced:

6.0.0

Maximum records per second (rps) allowed for a background query (i.e. UDF or ops query). If necessary, the query will be throttled so as to not exceed this rps value. Value range: 1-1000000. If the query must read the records from device to do any filtering (bin level filters), or if it reads them from device with no filtering, the throttle will be applied to the rate at which records are read. If the records are stored in memory, or can be filtered based on index metadata, the throttle will be applied to the rate at which the records are returned to the client.

Additional information

Example: Set background-query-max-rps to 6000:

asinfo -v "set-config:context=namespace;id=namespaceName;background-query-max-rps=6000"
ok
note

As the name suggests, this throttling applies only to background or UDF queries. For throttling of basic queries, specific client policy settings should be used. These are described in the applicable Client API doc under Query Policy.

background-scan-max-rps

[dynamic]
Context:

namespace

Default:

10000

Introduced:

4.7.0

Removed:

6.0.0

Maximum records per second (rps) allowed for a background scan (i.e. UDF or ops scan). If necessary, the scan will be throttled so as to not exceed this rps value. Value range: 1-1000000. If the scan must read the records from device to do any filtering (bin level filters), or if it reads them from device with no filtering, the throttle will be applied to the rate at which records are read. If the records are stored in memory, or can be filtered based on index metadata, the throttle will be applied to the rate at which the records are returned to the client.

Additional information

Example: Set background-scan-max-rps to 6000:

asinfo -v "set-config:context=namespace;id=namespaceName;background-scan-max-rps=6000"
ok

This parameter was renamed to background-query-max-rps in server 6.0.0.

note

As the name suggests, this throttling applies only to background or UDF scans. For throttling of basic scans, specific client policy settings should be used. These are described in the applicable Client API doc under Scan Policy.

cache-replica-writes

[dynamic]
Context:

namespace

Subcontext:

storage-engine device

Default:

false

Introduced:

4.8.0

Controls whether replica writes are placed into the post-write queue. Setting this true could improve performance in certain situations. It cannot be set true for data-in-memory namespaces.

Additional information
tip

It is recommended to set this true when using client rack-aware, or when using random read mode with replication-factor all.

cold-start-empty

[static]
Context:

namespace

Subcontext:

storage-engine device

Default:

false

Introduced:

3.3.21

Setting this to true will cause cold start to ignore existing data on drives and start as if empty. Does not affect fast restart.

Additional information
tip

May be used to avoid deleted objects reappearing upon cold start. After restart, migrations will replicate data back to this node.

caution

Before cold-starting another node, make sure migrations have completed to avoid any data loss.

cold-start-evict-ttl

[static]
Context:

namespace

Default:

4294967295

Removed:

4.5.1

This sets the TTL below which records will be evicted (not loaded) during cold start. It is often used to speed up cold start when the eviction depth is deep. The default value represents -1.

commit-min-size

[enterprise][static]
Context:

namespace

Subcontext:

storage-engine device

Default:

0

Introduced:

4.0

Minimum size, in bytes, of a disk flush when commit-to-device is enabled. Has to be a power of 2. Can be set as 4k. Default of 0 will auto-detect the smallest size possible for the device. It is usually recommended to keep the default for this configuration.

commit-to-device

[enterprise][static]
Context:

namespace

Subcontext:

storage-engine device or pmem

Default:

false

Introduced:

4.8.0 (pmem)

Wait for write to flush to disk or pmem before acknowledging the client. Only available for strong-consistency enabled namespaces. If using storage-engine device file storage with commit-to-device set true, it may be useful to set read-page-cache true.

Additional information
note

In case of a crash, when running with commit-to-device set to true, all partitions will be trusted upon the subsequent cold start.

When using shadow devices, this setting will commit to both primary and shadow prior to returning to the client and will therefore likely slow transaction latencies even further.

Having more physical or logical devices can help avoid potential bottlenecks caused by the serialization on the write buffer.
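A minimal sketch of a strong-consistency namespace using commit-to-device; strong-consistency and the device path are assumptions shown only for illustration:

namespace test {
    ...
    strong-consistency true
    storage-engine device {
        device /dev/nvme0n1
        commit-to-device true
    }
}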

compression

[enterprise][dynamic]
Context:

namespace

Subcontext:

storage-engine device or pmem

Default:

none

Introduced:

4.5.0 (device) 4.8.0 (pmem)

Options: none, lz4, snappy, zstd.

Use of compression requires a feature to be enabled in the feature-key-file, and specifies the algorithm used to compress records on SSD or pmem storage files. For zstd the compression-level can be specified.

As of version 4.5.3.2, the flat storage format is also used as the wire format for replication, migration, and duplicate resolution, providing potentially significant network bandwidth and CPU savings when using compression.

Additional information

Example: Set the namespace's compression algorithm to zstd:

asinfo -v 'set-config:context=namespace;id=namespaceName;compression=zstd'
ok
note

Note that compression does not allow writing records larger than the configured write block size (which is fixed at 8 MB for pmem), even if their compressed sizes would be smaller than the write block size. Compression happens at the storage and fabric layer. Using different compression options on different nodes for benchmarking purposes is supported.

caution

For Aerospike versions before Aerospike 4.9, do not dynamically set compression for storage-engine memory. This can possibly corrupt memory and cause the server to crash.

compression-level

[enterprise][dynamic]
Context:

namespace

Subcontext:

storage-engine device or pmem

Default:

0

Introduced:

4.5.0 (device) 4.8.0 (pmem)

Note: this is compression-level for storage-engine, not XDR compression-level for dc namespace. Scroll down to see that parameter.

Allowable range: 1-9

The compression level to use with zstd compression. Controls the trade-off between compression speed and compression ratio. A higher level value, for example 9, means more efficient but slower compression. A lower level value, for example 1, means less efficient but faster compression. Note that this item should only be specified when using compression zstd.

In Aerospike Server versions prior to 4.6.x, if this setting has never been specified when using compression zstd, a default flag of 0 is displayed and the compression-level of 9 will be used.

In Aerospike Server versions 4.6.x or newer, if this setting has never been specified when using compression zstd, a default flag of 9 is displayed and the compression-level of 9 will be used.

The compression configuration directives belong to a namespace's storage-engine section.

Additional information

Example: Set the namespace's compression-level to 1:

asinfo -v 'set-config:context=namespace;id=namespaceName;compression-level=1'
ok

conflict-resolution-policy

[dynamic]
Context:

namespace

Default:

generation

This setting can be set to either last-update-time or generation:

  • generation: Resolve record conflicts based on the record's generation number.
  • last-update-time: Resolve record conflicts based on the record's last update time (version 3.8.3 and up).
  • ttl: Resolve record conflicts based on the record's ttl (obsolete as of version 3.8.3).

This parameter does not impact the cold restart conflict resolution policy. For version 3.8.3 and above, cold restart conflict resolution always uses the last-update-time. For records created prior to 3.8.3, the cold start resolution falls back to generation. In case of equal last-update-time, the tie is broken by generation.

Additional information

The generation value could wrap back to 0 on a record with a high update rate (the generation number wraps at roughly 65K updates per record). In AP mode (strong-consistency set to false), network partitions could cause updates to be lost when the cluster re-forms. For use cases where it is more important to preserve the history of a record (such as lists or maps with items appended on each update), generation may be better suited, whereas for use cases where the last update is more important to preserve, last-update-time would be better suited.

Example: Set conflict-resolution-policy to last-update-time:

asinfo -v "set-config:context=namespace;id=namespaceName;conflict-resolution-policy=last-update-time"
ok
note

Not configurable when strong-consistency is enabled (in that case, neither generation alone nor last-update-time alone is used, but rather a combination of last-update-time and regime).

conflict-resolve-writes

[enterprise][dynamic]
Context:

namespace

Default:

false

Introduced:

5.4.0

This config is necessary for the XDR bin convergence feature. If this is turned on, bin-level last-update-time will be stored and will be used to determine the winner. If this is off, the bin-level last-update-time will be discarded and the latest write cannot be determined. This config cannot be turned on if single-bin is turned on for the namespace. Refer to the bin convergence feature documentation.

Additional information

Example: Set conflict-resolve-writes dynamically:

asinfo -v "set-config:context=namespace;id=namespaceName;conflict-resolve-writes=true"
ok

data-in-index

[unanimous] [static]
Context:

namespace

Default:

false

Optimization for the single-bin case that stores the data directly in the index space; only integer or float data is allowed. Can only be used when storage-engine is device and single-bin is true.

Additional information
note

Allows fast restart for the single-bin, data-in-memory, integer- or float-only data pattern. For single-bin namespaces not configured with data-in-index, integer or float data will also be stored in the index but will not allow fast restart when data-in-memory is set to true.
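A static configuration sketch of the pattern described above; single-bin is assumed from the standard namespace parameters, and the namespace name and device are hypothetical:

namespace counters {
    ...
    single-bin true
    data-in-index true
    storage-engine device {
        device /dev/sdb
        data-in-memory true
    }
}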

data-in-memory

[static]
Context:

namespace

Subcontext:

storage-engine device

Default:

false

Keep a copy of all data in memory always.

default-ttl

[dynamic]
Context:

namespace

Default:

0

Default time-to-live (in seconds) for a record from the time of creation or last update. The record will expire in the system beyond this time. This is not allowed to exceed the max-ttl value as of version 3.8.3.
As of version 4.5.1, max-ttl no longer exists, but an upper limit of ten years (3650D) on the default time-to-live still applies.

Additional information

Supports the following suffixes:

  • S Second

  • M Minute

  • H Hour

  • D Day

Example:

default-ttl 60D

Set default-ttl to 30 days dynamically:

asinfo -v "set-config:context=namespace;id=namespaceName;default-ttl=30D"
ok
note

Can be overridden via API. 0 means lives forever.

caution

Reducing an existing record's TTL (or issuing a non durable delete) may cause older versions of the records to be resurrected upon cold restarts. For more details, see Issues with cold-start resurrecting deleted records.

As of version 4.9, the server will not start if default-ttl is non-zero but nsup-period is 0 (the default), unless allow-ttl-without-nsup is set true.

The same restriction is also enforced when setting default-ttl dynamically, as of versions 4.9.0.12, 5.0.0.13, and 5.1.0.10.

defrag-lwm-pct

[dynamic]
Context:

namespace

Subcontext:

storage-engine device or pmem

Default:

50

Write blocks that are filled below the specified percentage will be marked as eligible for defragmentation.

Additional information

Example: Set defrag-lwm-pct to 55:

asinfo -v "set-config:context=namespace;id=namespaceName;defrag-lwm-pct=55"
ok
note

A higher percentage means more blocks to be defragmented and denser data on the disk.

Do not set the value to 100% or higher as it would put the system in an endless busy loop.

defrag-max-blocks

[dynamic]
Context:

namespace

Subcontext:

storage-engine device

Default:

4000

Removed:

3.3.17

Defragment at most specified number of disk blocks in each run.

defrag-period

[dynamic]
Context:

namespace

Subcontext:

storage-engine device

Default:

1

Removed:

3.3.17

Interval, in seconds, at which the defrag will scan all blocks and mark the ones eligible to be defragmented.

Additional information
note

This can be set to 0 in extreme situations but the disk i/o and the impact on latencies should be carefully monitored. See defrag-lwm-pct regarding criteria for blocks to be eligible to be defragmented.

defrag-queue-min

[dynamic]
Context:

namespace

Subcontext:

storage-engine device or pmem

Default:

0

Introduced:

3.4.0

Don't defrag unless the queue has this many eligible wblocks.

Additional information

Example: Set defrag-queue-min to 10:

asinfo -v "set-config:context=namespace;id=namespaceName;defrag-queue-min=10"
ok
tip

This may reduce write amplification for use cases with infrequent record overwrites or periodic record purges by allowing write blocks to linger on the queue longer and potentially be nearly empty when processed.

defrag-sleep

[dynamic]
Context:

namespace

Subcontext:

storage-engine device or pmem

Default:

1000

Introduced:

3.3.17

Number of microseconds to sleep after each wblock defragged.

Additional information

Example: Set defrag-sleep to 500:

asinfo -v "set-config:context=namespace;id=namespaceName;defrag-sleep=500"
ok
note

A secondary usage of defrag-sleep is to define the interval at which the write queue is checked when defragmentation is throttled due to write queue overflow. Details on this can be found in this KB article.

defrag-startup-minimum

[static]
Context:

namespace

Subcontext:

storage-engine device or pmem

Default:

0

Server needs at least specified amount (in percentage) of free space at startup.

The value must be an integer and the allowable range is 0 to 99.

In server versions prior to 5.7, the default value is 10 and the allowable range is 1 to 99.

device

[static]
Context:

namespace

Subcontext:

storage-engine device

Raw device used to store the namespace.

Additional information

Example: Persist to two devices

device /dev/sdb
device /dev/sdc

Persist to device and shadow device

device /dev/nvme0n1 /dev/sdb
note

As of 4.3.0.2, when requesting the configuration via the 'info' API, the key for a particular device will be storage-engine.device[ix] where 'ix' is an index to identify this device with its associated statistics (such as the statistic storage-engine.device[ix].age).

If configured, the device's shadow device will appear as storage-engine.device[ix].shadow.

tip

You can specify multiple devices per namespace.

caution

There is a maximum limit of 2 TiB on each device size.
You may not use both device and file in the same namespace.
There is a limit of 128 devices per namespace as of version 4.2 (64 for versions down to 3.12.1 and 32 in previous versions).

direct-files

[static]
Context:

namespace

Subcontext:

storage-engine device or pmem

Default:

false

Introduced:

4.3.1 (device) 4.8.0 (pmem)

Relevant only for file storage. If using storage-engine pmem, relevant only for shadow file storage. If direct-files is set true, then the odirect and odsync flags are enabled for file IO. This means write-buffers are synchronously written all the way through to the devices under the file system. If using storage-engine device with data-in-memory set false, then it may be useful to set read-page-cache true. Refer to the Buffering and Caching in Aerospike article for further details.

Additional information
caution

Can impact performance, especially if files are backed by rotational devices.
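A sketch of file-backed storage with direct-files enabled; the path and filesize are hypothetical, and both parameters are described later in this reference:

namespace test {
    ...
    storage-engine device {
        file /opt/aerospike/data/test.dat
        filesize 64G
        direct-files true
    }
}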

disable-cold-start-eviction

[static]
Context:

namespace

Default:

false

Introduced:

4.3.0.2

If true, disables eviction that may occur at cold start for this namespace only.

disable-eviction

[unanimous] [dynamic]
Context:

namespace

Subcontext:

set

Default:

false

Introduced:

5.6

Setting it to true will protect the set from evictions. Setting this parameter does not affect the TTL of records within the set. Records can have a TTL and will expire as normal.

This parameter was renamed from set-disable-eviction in version 5.6.

Additional information

Example: Set disable-eviction on the set dynamically:

asinfo -v "set-config:context=namespace;id=namespaceName;set=setName;disable-eviction=true"
ok

Set disable-eviction under the namespace definition in aerospike.conf:

set set1 {
    disable-eviction true
}
set set2 {
    disable-eviction true
}
set test {
    disable-eviction true
}
note

Eviction may well happen at startup and, as such, it is good practice to enter protected sets into aerospike.conf as shown above to prevent a protected set being evicted during cold start.

disable-nsup

[dynamic]
Context:

namespace

Default:

false

Introduced:

4.3.0.2

Removed:

4.5.1

Removed as of version 4.5.1. Set nsup-period to 0 to disable nsup.
If true, disables NSUP primary index reductions for this namespace only. When disable-nsup is true, each nsup-period interval will log a line similar to the example below.

Additional information

Example: Jul 19 2018 17:16:17.936 GMT-0700: INFO (nsup): (thr_nsup.c:892) (test) nsup-skipped

disable-odirect

[static]
Context:

namespace

Subcontext:

storage-engine device

Default:

false

Removed:

4.3.1

If true, disables the odirect flag when reading or writing to raw devices. This allows the OS to leverage page cache and can help with latencies for some workload types. Should be tested or deployed on a single node prior to full production roll out. Refer to the Buffering and Caching in Aerospike article for further details.

Additional information
note

Performant storage sub-systems running on older kernels may be adversely impacted by this setting, as checking the page cache prior to accessing the storage sub-system may carry a penalty.
Workloads with a higher cache_read_pct may be considered, but you should also check the impact of increasing the post-write-queue configuration parameter. Less performant storage sub-systems (network attached, for example) may greatly benefit from disabling the odirect flag.

tip

For similar functionality refer to read-page-cache.

disable-odsync

[static]
Context:

namespace

Subcontext:

storage-engine device or pmem

Default:

false

Introduced:

4.5.0.12, 4.5.1.8, 4.5.2.3, 4.5.3.3

If disable-odsync is set true, then the Linux O_DSYNC I/O flag is set false (even if, for files, direct-files is set true). Disabling O_DSYNC would likely improve performance at a cost of relaxed durability guarantees. Refer to the Buffering and Caching in Aerospike article for further details.

Note: disable-odsync and commit-to-device cannot both be set to true. Setting both to true will prevent the server from starting, given their opposite positions in the durability/performance trade-off.

Additional information
note

With data in PMEM, this setting is only relevant for shadow file storage.

Some further details on the effect of this setting: When a database record is written or updated, the changed record initially resides in a memory buffer on a server node in a structure known as the current write block. Write blocks are regularly flushed to SSD via Linux pwrite(2) syscalls, with the interval bounded by the flush-max-ms configuration parameter. Until a write block is persisted to SSD (by default at most 1 second after being written to DRAM), its contents are subject to loss in the event of a system failure (e.g. power outage). The default behavior (O_DSYNC enabled) is that pwrite(2) will block the calling thread until the data has been written to the SSD. That delay reduces the work per unit of time a thread can do, potentially incurring a performance penalty. When O_DSYNC is disabled, a thread calling pwrite(2) will return immediately, enabling that thread to do other work. However, the data may not be transferred to the device until some time in the future. If there is a system failure during the interval between calling pwrite(2) and when the data is completely written to SSD, there will be data loss (on that specific node only). Whether trading off durability against performance is worthwhile depends on the application, the Linux I/O implementation (which affects how quickly data is transferred), and the sensitivity of the record data. For some data (e.g. a frequently-updated sensor reading) the risk may be acceptable; for others (e.g. a financial transaction) it may not. A full description of Aerospike caching may be found on this buffering and caching knowledge base article.

If you are utilizing the rack aware functionality for your cluster, the only way we would expect potential data loss is if Replication Factor (RF) number of servers fail within the SAME short duration described above (one second at default, plus virtual/cloud delay, if disable-odsync is turned on), one per rack, across the RF number of racks that store the copies of the record. So, for example, in an RF=2 configuration with the servers split between two racks, for a potential loss of data to occur, a single server would have to fail in EACH rack within that very short duration.
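A sketch of a namespace relaxing durability for performance as described above; the device path is hypothetical, and flush-max-ms is shown at its default simply to highlight the bound on the window of potential loss:

namespace test {
    ...
    storage-engine device {
        device /dev/nvme0n1
        disable-odsync true
        flush-max-ms 1000
    }
}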

disable-write-dup-res

[dynamic]
Context:

namespace

Default:

false

Introduced:

3.15.1.3

Disables write duplicate resolution for the namespace. Only applicable for AP namespaces (non strong-consistency enabled). Write duplicate resolution is needed when recovering from node maintenance/failure or a partition. In such situations, a node will chase different versions of a record prior to applying the update. This only applies during migrations when multiple versions of a given partition may exist.

Additional information
tip

Setting to true will disable write duplicate resolution which can improve write performance during migrations but may also result in lost updates.

disallow-null-setname

[dynamic]
Context:

namespace

Enabling this configuration causes record write attempts without a set name to be rejected.

Additional information

By default, Aerospike allows writes with and without a set name. If a record is sent without a setname, it gets assigned a 'null' set. If this configuration is enabled, any record without a setname will not be allowed to be written to the namespace. An 'Error Code 4 AEROSPIKE_ERR_REQUEST_INVALID' will be sent back to the client. Additionally, a warning will be logged to the server with the message null/empty set name not allowed for namespace.

Note: Ensure that the configuration is set uniformly on all nodes. Otherwise, some nodes would allow such null-set records and others would not.

Example: Dynamically enabling this configuration:

asinfo -v "set-config:context=namespace;id=namespaceName;disallow-null-setname=true"

earth-radius-meters

[static]
Context:

namespace

Subcontext:

geo2dsphere-within

Default:

6371000

Introduced:

3.7.0.1

Earth's radius in meters, since the workspace here is the complete earth.
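A sketch of the geo2dsphere-within subcontext with its defaults spelled out; the stanza name is assumed to match the subcontext name, and the namespace name is hypothetical:

namespace geo {
    ...
    geo2dsphere-within {
        earth-radius-meters 6371000
        max-level 20
        max-cells 12
        level-mod 1
    }
}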

enable-benchmarks-batch-sub

[dynamic]
Context:

namespace

Default:

false

Introduced:

3.9

Enable histograms for batch sub transactions. Refer to the Histograms from Aerospike Logs page for details.

Additional information

Here is the list of configuration enabled histograms:

Example: Set enable-benchmarks-batch-sub to true:

asinfo -v 'set-config:context=namespace;id=<namespaceName>;enable-benchmarks-batch-sub=true'
ok

enable-benchmarks-ops-sub

[dynamic]
Context:

namespace

Default:

false

Introduced:

4.7

Enable histograms for ops sub transactions. Refer to the Histograms from Aerospike Logs page for details.

Additional information

Here is the list of configuration enabled histograms:

Example: Set enable-benchmarks-ops-sub to true:

asinfo -v 'set-config:context=namespace;id=<namespaceName>;enable-benchmarks-ops-sub=true'
ok

enable-benchmarks-read

[dynamic]
Context:

namespace

Default:

false

Introduced:

3.9

Enable histograms for read transactions. Refer to the Histograms from Aerospike Logs page for details.

Additional information

Here is the list of configuration enabled histograms:

Example: Set enable-benchmarks-read to true:

asinfo -v 'set-config:context=namespace;id=<namespaceName>;enable-benchmarks-read=true'
ok

enable-benchmarks-storage

[dynamic]
Context:

namespace

Subcontext:

storage-engine device or pmem

Default:

false

Introduced:

3.9.0 (device) 4.8.0 (pmem)

Enable histograms for storage access. Refer to the Histograms from Aerospike Logs page for details.

Additional information

Here is the list of configuration enabled histograms:

Example: Set enable-benchmarks-storage to true:

asinfo -v 'set-config:context=namespace;id=<namespaceName>;enable-benchmarks-storage=true'
ok

enable-benchmarks-udf

[dynamic]
Context:

namespace

Default:

false

Introduced:

3.9

Enable histograms for udf transactions. Refer to the Histograms from Aerospike Logs page for details.

Additional information

Here is the list of configuration enabled histograms:

Example: Set enable-benchmarks-udf to true:

asinfo -v 'set-config:context=namespace;id=<namespaceName>;enable-benchmarks-udf=true'
ok

enable-benchmarks-udf-sub

[dynamic]
Context:

namespace

Default:

false

Introduced:

3.9

Enable histograms for udf sub transactions. Refer to the Histograms from Aerospike Logs page for details.

Additional information

Here is the list of configuration enabled histograms:

Example: Set enable-benchmarks-udf-sub to true:

asinfo -v 'set-config:context=namespace;id=<namespaceName>;enable-benchmarks-udf-sub=true'
ok

enable-benchmarks-write

[dynamic]
Context:

namespace

Default:

false

Introduced:

3.9

Enable histograms for write transactions. Refer to the Histograms from Aerospike Logs page for details.

Additional information

Here is the list of configuration enabled histograms:

Example: Set enable-benchmarks-write to true:

asinfo -v 'set-config:context=namespace;id=<namespaceName>;enable-benchmarks-write=true'
ok

enable-hist-proxy

[dynamic]
Context:

namespace

Default:

false

Introduced:

3.9

Enable histograms for proxy transactions. Refer to the Histograms from Aerospike Logs page for details.

Additional information

Here is the list of configuration enabled histograms:

Example: Set enable-hist-proxy to true:

asinfo -v 'set-config:context=namespace;id=<namespaceName>;enable-hist-proxy=true'
ok

enable-index

[dynamic]
Context:

namespace

Subcontext:

set

Default:

false

Introduced:

5.6

Setting this to true will maintain an index specific to the set, which will be used for scans of the set. Using such an index will improve performance of scans of the set if the set is very small compared to the size of its namespace. Refer to the Set Indexes documentation for further details.

Additional information

Example: Enable a set-specific index within the namespace definition in the configuration file:

set setName {
    enable-index true
}

Dynamically enable a set-specific index:

asinfo -v "set-config:context=namespace;id=namespaceName;set=setName;enable-index=true"
ok

enable-osync

[static]
Context:

namespace

Subcontext:

storage-engine device

Default:

false

Introduced:

3.3.21

Removed:

4.3.1.1

Only relevant for raw devices (not relevant for file storage). Tells the device to flush on every write. This may impact performance. Refer to the Buffering and Caching in Aerospike article for further details.

enable-xdr

[dynamic]
Context:

namespace

Default:

false

Removed:

5.0.0

This controls, at the namespace level, whether digest log entries are being written to the digest log. This therefore practically controls whether records are being shipped through XDR globally, assuming DCs are configured and available, xdr-shipping-enabled is kept at its default value (true) and the enable-xdr configuration is set to true at the XDR stanza level.
Configured DCs that are linked to namespaces will be connected to independently of the value of this setting. To prevent the connections from being made, you will need to either a) remove all seed nodes from the datacenter definition, or b) remove the datacenter from all namespace definitions, or do so dynamically to break existing connections.

Additional information

Example: Enable XDR dynamically on the namespace:

asinfo -v "set-config:context=namespace;id=namespaceName;enable-xdr=true"
ok

encryption

[enterprise][static]
Context:

namespace

Subcontext:

storage-engine device or pmem

Default:

aes-128

Introduced:

4.5.0 (device) 4.8.0 (pmem)

Options: aes-128, aes-256
Specifies the algorithm used by encryption at rest.
Related parameters are encryption-key-file and encryption-old-key-file.
Requires a feature to be enabled in the feature-key-file.

encryption-key-file

[enterprise][static]
Context:

namespace

Subcontext:

storage-engine device or pmem

Default:

N/A

Introduced:

3.15.1 (device) 4.8.0 (pmem)

Enables encryption-at-rest by specifying either the filesystem path to the user-supplied, randomly generated encryption key or the name of the Vault secret that stores that user-supplied, randomly generated encryption key. In version 5.3+, an environment variable that holds the encryption key may also be specified.

In version 5.1+, for the alternative integration with HashiCorp Vault, the value of the configuration parameter must be prefixed with the literal vault: and must be followed by the name of the secret on the Vault service. For more information, see Optional security with Vault integration.

In version 5.3+, the configuration parameter can be set to env-b64:<variable_name>, and the base64-encoded key will be read from the named environment variable and decoded into binary form.

For information on how encryption-at-rest works, warnings, and other considerations, see Configuring Encryption-at-Rest.

Related parameters are encryption and encryption-old-key-file.

Requires a feature to be enabled in the feature-key-file.

Additional information

Example: Enable encryption-at-rest for namespace test with new and old encryption files:

namespace test {
    ...
    storage-engine device {
        device /dev/sda1
        ...
        encryption-key-file /etc/aerospike/key.dat
        encryption-old-key-file /etc/aerospike/old-key.dat
    }
    ...
}

Enable encryption-at-rest for namespace test secured via HashiCorp Vault:

namespace test {
    ...
    storage-engine device {
        device /dev/sda1
        ...
        encryption-key-file vault:encryption-key-file-secret-name
        encryption-old-key-file vault:encryption-old-key-file-secret-name
    }
    ...
}
tip

The contents of the key file and the old key file are loaded at startup, just after parsing the configuration file. Once the Aerospike daemon is running, you may safely remove the key file and the old key file, though keep in mind you need the files to restart the Aerospike process. 5.7 and subsequent: To switch encryption-at-rest keys in a rolling fashion without zeroizing the storage devices, rename the encryption-key-file parameter to encryption-old-key-file, keeping the same value, and introduce a new encryption-key-file parameter with a different value identifying a file, environment variable, or vault repository entry containing a new encryption key. Then restart the Aerospike daemon.
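A before/after sketch of the 5.7+ rolling key switch described in the tip above; the key file paths are hypothetical:

# Before the key switch
storage-engine device {
    device /dev/sda1
    encryption-key-file /etc/aerospike/key-old.dat
}

# After the key switch (restart the node for it to take effect)
storage-engine device {
    device /dev/sda1
    encryption-key-file /etc/aerospike/key-new.dat
    encryption-old-key-file /etc/aerospike/key-old.dat
}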

caution

Prior to 5.7: Adding, removing or changing the key file requires stopping the Aerospike daemon, zeroizing the storage devices and restarting the Aerospike daemon. Migrations should complete prior to proceeding to the next node in the cluster. It is therefore possible to make such changes in a rolling fashion across a cluster.

encryption-old-key-file

[enterprise][static]
Context:

namespace

Subcontext:

storage-engine device or pmem

Default:

N/A

Introduced:

5.7

Enables encryption-at-rest key rotation by specifying either the filesystem path to the previous version of the user-supplied, randomly generated encryption key or the name of the Vault secret that stores the previous version of that user-supplied, randomly generated encryption key. An environment variable that holds the encryption key may also be specified.

For the alternative integration with HashiCorp Vault, the value of the configuration parameter must be prefixed with literal vault: and must be followed by the name of the secret on the Vault service. For more information, see Optional security with Vault integration.

The configuration parameter can be set to env-b64:<variable_name>, and the base64-encoded key will be read from the named environment variable and decoded into binary form.

For information on how encryption-at-rest works, warnings, and other considerations, see Configuring Encryption-at-Rest.

Related parameters are encryption and encryption-key-file.

Requires a feature to be enabled in the feature-key-file.

Additional information

Example: Enable encryption-at-rest for namespace test with new and old encryption files:

namespace test {
    ...
    storage-engine device {
        device /dev/sda1
        ...
        encryption-key-file key.dat
        encryption-old-key-file /etc/aerospike/old-key.dat
    }
    ...
}

Enable encryption-at-rest for namespace test secured via HashiCorp Vault:

namespace test {
    ...
    storage-engine device {
        device /dev/sda1
        ...
        encryption-key-file vault:encryption-key-file
        encryption-old-key-file vault:encryption-old-key-file-secret-name
    }
    ...
}
tip

The contents of the key file and the old key file are loaded at startup, just after parsing the configuration file. Once the Aerospike daemon is running, you may safely remove the key file and the old key file, though keep in mind you need to store them safely to be able to reuse the files to restart the aerospike process. 5.7 and subsequent: To switch encryption-at-rest keys in a rolling fashion without zeroizing the storage devices, rename the encryption-key-file parameter to encryption-old-key-file, keeping the same value, and introduce a new encryption-key-file parameter with a different value identifying a file, environment variable, or vault repository entry containing a new encryption key. Then restart the Aerospike daemon.

caution

Prior to 5.7: Adding, removing or changing the key file requires stopping the Aerospike daemon, zeroizing the storage devices and restarting the Aerospike daemon. Migrations should complete prior to proceeding to the next node in the cluster. It is therefore possible to make such changes in a rolling fashion across a cluster.

evict-hist-buckets

[dynamic]
Context:

namespace

Default:

10000

Introduced:

3.8

Number of histogram buckets used for evictions. Must be between 100 and 10,000,000. Takes effect on the next eviction round.

Additional information

Example: Set evict-hist-buckets to 200000:

asinfo -v "set-config:context=namespace;id=namespaceName;evict-hist-buckets=200000"
ok
note

Each bucket costs 4 bytes of memory, so 10 Million buckets means a 40MB histogram. Note that cold-start eviction is a special case, where the number of histogram buckets used is at least 100,000. That is, 100,000 buckets are used unless the current evict-hist-buckets setting is larger.

evict-tenths-pct

[dynamic]
Context:

namespace

Default:

5

Maximum 1/10th percentage of objects to be deleted during each iteration of eviction.

Additional information

Example: Set evict-tenths-pct to 10:

asinfo -v "set-config:context=namespace;id=namespaceName;evict-tenths-pct=10"
ok

file

[static]
Context:

namespace

Subcontext:

storage-engine device or pmem

Data file path on rotational disk (using a file system) or pmem (as of version 4.8). As of 4.3.0.2, the file may include an optional 'shadow file' as a second argument.

Additional information

Example: Persist to two files:

file /mnt/disk1/myfile1.dat
file /mnt/disk2/myfile2.dat

Persist to two files (pmem):

file /mnt/pmem/myfile1.dat
file /mnt/pmem/myfile2.dat

Persist file with a shadow file:

file /mnt/pmem1/rw_file.dat /mnt/sdb1/shadow_file.dat
file /mnt/nvme0n1/rw_file.dat /mnt/sdb1/shadow_file.dat
note

As of 4.3.0.2, when requesting the configuration via the 'info' API, the key for a particular device will be 'storage-engine.file[ix]' where 'ix' is an index to identify this file with its associated statistics (such as the statistic 'storage-engine.file[ix].age').

If configured, the file's shadow file will appear as 'storage-engine.file[ix].shadow'.

tip

You can specify multiple files per namespace. The directory path should exist and the user/group the Aerospike process is running under should have read/write permissions. The file itself will be created by the process.

caution

There is a maximum file size limit of 2 TiB.
You must not use both device and file in the same namespace.
There is a limit of 128 files per namespace as of version 4.2 (64 for versions down to 3.12.1 and 32 in previous versions).

filesize

[required][static]
Context:

namespace

Subcontext:

storage-engine device or pmem

Maximum size for each file storage defined in this namespace.

Prior to 4.3.0.2, the default value was 16GiB. As of 4.3.0.2, filesize is required to be set explicitly when the namespace is configured to use files.

Additional information

Supports the following suffixes:

  • K Kibibyte (KiB)

  • M Mebibyte (MiB)

  • G Gibibyte (GiB)

  • T Tebibyte (TiB)

  • P Pebibyte (PiB)

Example:

filesize 500G
note

There is a maximum limit of 2 TiB on the filesize.
Default for 2.x: 17179869184.

flush-max-ms

[dynamic]
Context:

namespace

Subcontext:

storage-engine device or pmem

Default:

1000

Introduced:

3.3.21 (device) 4.8.0 (pmem)

Configures the maximum amount of time that a Streaming Write Buffer (SWB) can go without being written to device or pmem storage file. This only becomes relevant for very low or intermittent write rates, since write buffers do get written to device (or pmem) when full. In general, changing this should not be necessary. Refer to the Buffering and Caching in Aerospike article for further details.

Additional information

Example: Set flush-max-ms to 500:

asinfo -v "set-config:context=namespace;id=namespaceName;flush-max-ms=500"
ok
note

The current buffer will be flushed if there was something in it that was not flushed, i.e. a change since last time.

fsync-max-sec

[dynamic]
Context:

namespace

Subcontext:

storage-engine device

Default:

0

Introduced:

3.3.21

Removed:

4.3.1

Setting this will cause the namespace's devices to fsync at this interval (in seconds). By default (0) the namespace's devices will not fsync (data will be flushed to physical device at the system's discretion). Refer to the Buffering and Caching in Aerospike article for further details.

Additional information

Example: Set fsync-max-sec to 1:

asinfo -v "set-config:context=namespace;id=namespaceName;fsync-max-sec=1"
ok
note

fsync pushes the data to the device from both the page cache (when using files) and the hardware cache. However this would only impact devices that have their queues configured as write back. For write through devices, the cache is never in a dirty or unflushed state and this configuration option wouldn't have any impact when using such devices raw. To check the flag for a device, cat the write_cache file, for example: cat /sys/class/block/nvme0n1/queue/write_cache.

tip

For similar functionality see direct-files.

caution

Can impact performance if set to a short interval.

high-water-disk-pct

[dynamic]
Context:

namespace

Default:

0

Data will be evicted if the disk utilization is greater than this specified percentage.

Setting this parameter to zero (which is the default) disables this threshold.

Additional information

Example: Set high-water-disk-pct to 60:

asinfo -v "set-config:context=namespace;id=namespaceName;high-water-disk-pct=60"
ok
note

Records with TTL 0 will not be evicted. Data that is set to expire first, by TTL bucket, will be first to be evicted.

For additional discussion, see Namespace Data Retention Configuration.

Prior to Aerospike version 4.9, the default was 50.

Setting this parameter to 0 in releases earlier than Aerospike 4.9 is not supported and may trigger immediate evictions.

high-water-memory-pct

[dynamic]
Context:

namespace

Default:

0

Data will be evicted if the memory utilization is greater than this specified percentage.

Setting this parameter to zero (which is the default) disables this threshold.

Additional information

Example: Set high-water-memory-pct to 60:

asinfo -v "set-config:context=namespace;id=namespaceName;high-water-memory-pct=60"
ok
note

Records with TTL 0 will not be evicted. Data that is set to expire first, by TTL bucket, will be first to be evicted.

For additional discussion, see Namespace Data Retention Configuration.

Prior to Aerospike version 4.9, the default was 60.

Setting this parameter to 0 in releases earlier than Aerospike 4.9 is not supported and may trigger immediate evictions.

ignore-migrate-fill-delay

[enterprise][dynamic]
Context:

namespace

Default:

false

Introduced:

5.2

For namespaces in storage-engine memory, setting the ignore-migrate-fill-delay parameter to true overrides migrate-fill-delay, effectively setting it to 0. migrate-fill-delay imposes a time lag before the "fill" migration to the cluster nodes that do not normally function as replicas.

A time lag is useful for a cluster where some of the namespaces use storage-engine memory and are not persisted. This situation requires migrations to immediately repopulate a node that won't have any other source for such repopulation when it restarts.

ignore-migrate-fill-delay is not useful for strong-consistency enabled namespaces, even non-persisted ones, because the roster dictates which node would normally hold a given partition.

For more information, see Delaying "Fill" Migrations.

Additional information

Example: To disregard the migrate-fill-delay setting and cause nameSpaceName to begin "fill" migration:

asinfo -v "set-config:context=namespace;id=nameSpaceName;ignore-migrate-fill-delay=true"

index-stage-size

[static]
Context:

namespace

Default:

1G

Introduced:

4.2.0.2

Configuration used to size the primary index arena(s).

Additional information

The value has to be a power of 2. The lower limit is 128MB; the upper limit is 1GB prior to version 4.2.0.2 and 16GB for versions 4.2.0.2 and higher. This setting changes the size of each of the 2048 (EE) or 256 (CE) possible arena stages and requires a cold start to take effect. Notation such as G for gigabytes, M for megabytes, and K for kilobytes is supported.

index-type

[enterprise][static]
Context:

namespace

Default:

shmem

Introduced:

4.3.0.2 (shmem) 4.3.0.2 (flash) 4.5.0.1 (pmem)

Options: shmem, flash, pmem

If shmem, index is stored in Linux shared memory (DRAM); i.e., a cold-start is required to rebuild the node after it is rebooted.

If flash, the index is stored in a block storage device (typically NVMe SSD); i.e., a node is able to fast-restart even after being rebooted.
For sizing details, refer to the Aerospike All Flash capacity planning page.

If pmem, the index is stored in persistent memory (e.g., Intel Optane DC Persistent Memory); i.e., a node is able to fast-restart even after being rebooted.

Setting to flash for Aerospike Server versions 4.3.0.2 to 4.7 requires a feature to be enabled in the feature-key-file. Setting to pmem requires a feature to be enabled in the feature-key-file.

Additional information
note

On Community Edition, this will appear as 'undefined' and is not configurable.
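A sketch of an All Flash index configuration; the mount and mounts-size-limit sub-parameters and the mount path are assumptions not documented in this entry:

namespace test {
    ...
    index-type flash {
        mount /mnt/nvme-index
        mounts-size-limit 100G
    }
    ...
}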

level-mod

[static]
Context:

namespace

Subcontext:

geo2dsphere-within

Default:

1

Introduced:

3.7.0.1

If specified, then only cells where (level - min-level) is a multiple of "level-mod" will be used (default 1). This effectively allows the branching factor of the S2 Cell Id hierarchy to be increased. Currently the only parameter values allowed are 1, 2, or 3, corresponding to branching factors of 4, 16, and 64 respectively.

low-water-pct

[dynamic]
Context:

namespace

Default:

0

Removed:

3.3.13

Expiration/Eviction thread will not do any activity if the used percentage of memory/disk is less than the specified limit.

max-cells

[dynamic]
Context:

namespace

Subcontext:

geo2dsphere-within

Default:

12

Introduced:

3.7.0.1

Sets the maximum desired number of cells in the approximation. The maximum number of cells allowed is 256.

Additional information

Example: Changing max-cells dynamically:

asinfo -v "set-config:context=namespace;id=namespacename;geo2dsphere-within-max-cells=24"
ok
note
  • For server versions prior to 4.4, maximum allowed value is 32.

max-level

[dynamic]
Context:

namespace

Subcontext:

geo2dsphere-within

Default:

20

Introduced:

3.7.0.1

Maximum depth (number of subdivisions) to use for a single cell. This defines the minimum cell size to be used.

The allowable range for this parameter is 0 to 30. At level 20 the cell size varies from 46.4 to 97.3 square meters.

Additional information

Example: Changing max-level dynamically:

asinfo -v "set-config:context=namespace;id=namespacename;geo2dsphere-within-max-level=25"
ok
note

Cannot be set dynamically in versions prior to 4.4.

In versions prior to 5.7, the default value is 30.

max-record-size

[dynamic]
Context:

namespace

Default:

0

Introduced:

5.7.0

Specifies the maximum allowed record size in bytes. Not used if value set to default (0).

  • For storage-engine 'device' namespaces, max-record-size cannot be larger than the write-block-size.
  • For storage-engine 'pmem' namespaces, max-record-size cannot be larger than the pmem write block size, which is 8 MiB.
  • For storage-engine 'memory' namespaces, max-record-size cannot be larger than 128 MiB.
  • Any write attempt that breaches max-record-size fails with a code 13 error, fail_record_too_big.
Additional information

Example: Changing max-record-size dynamically:

asinfo -v "set-config:context=namespace;id=namespacename;max-record-size=256"
ok

max-ttl

[dynamic]
Context:

namespace

Default:

3650D (3.8.3)

Removed:

4.5.1

Maximum TTL allowed in the server. The default-ttl is not allowed to exceed this value (as of version 3.8.3). It also cannot be set higher than 10 years (3650D). max-ttl cannot be set to zero.

Additional information

Default value is 0 for versions prior to 3.8.3. Supports the following suffixes:

  • S Second

  • M Minute

  • H Hour

  • D Day

Example:

max-ttl 365D

Set max-ttl to 1500 days dynamically:

asinfo -v "set-config:context=namespace;id=namespaceName;max-ttl=1500D"
ok
tip

This is used to trap rogue clients from inserting junk values.

max-write-cache

[dynamic]
Context:

namespace

Subcontext:

storage-engine device or pmem

Default:

64M

Introduced:

4.8.0 (pmem)

Number of bytes of pending write blocks that the system is allowed to keep before failing writes. The write cache implements a circuit breaker to throttle excessive writes. Should be a multiple of write-block-size. While max-write-cache has no maximum permitted value, Aerospike recommends a maximum of 2047M. See the additional information below for more details.

Additional information
note

The size of the write cache is calculated using the number of devices in the namespace multiplied by the value of max-write-cache. Client writes are allowed until the sum of all in-use streaming write buffers (swb) equals the calculated amount. To see how many streaming write buffers are in use, look at the write_q stat (or shadow_write_q) or directly at the write-q on the defrag log line.

Example - How write cache is calculated

The write cache size is the number of devices for the namespace on the node, multiplied by the value of max-write-cache. The cache for each device must be accounted for in the total sizing calculation.

Each device has its own write queue (write-q). Assume the following:

  • a 3-node cluster with 1 namespace and 4 devices for that namespace on each node (12 total across the cluster)
  • max-write-cache is set at the default 64 MiB and write-block-size at 1 MiB

When the sum of all pending blocks across the 4 write queues breaches 256 MiB (64 MiB x 4), the "write fail: queue too deep" error and Error Code 18: Device overload are thrown.

In server versions prior to 5.1, the error codes are triggered when a single device goes above the calculated amount (64 MiB in this case, or 64 blocks of 1 MiB each).

In server version 5.1 and later, it is a function of the number of devices. The error is thrown only when the sum of all pending blocks across all 4 write queues breaches the calculated amount. The write cache does not have to be the same size on each of the 4 devices in this example: each could have 64 MiB (64 blocks), or one device could have 256 MiB (256 blocks) while the other three are keeping up and are at 0.

If you configure max-write-cache to 128 MiB and have 10 devices on the namespace on each node, you need to account for potentially using up 128 MiB x 10 = 1280 MiB of RAM in case you go all the way to that value.

tip

When the queue grows beyond the configured limit and device overload errors appear, you can dynamically increase the max-write-cache limit with the following example command.

asinfo -v 'set-config:context=namespace;id=namespaceName;max-write-cache=128M'

For more details, see the Log Reference and Resilience.

memory-size

[required][dynamic]
Context:

namespace

Maximum amount of memory for the namespace. Cannot be reduced by more than 50% of previously set value. See Capacity Planning for namespace sizing details.

Prior to 4.3.0.2, the default value was 4GiB. As of 4.3.0.2, memory-size is required to be explicitly configured, with a minimum of 1MiB.

Additional information

Supports the following suffixes:

  • K Kibibyte (KiB)

  • M Mebibyte (MiB)

  • G Gibibyte (GiB)

  • T Tebibyte (TiB)

  • P Pebibyte (PiB)

Example:

memory-size 120G

Set memory-size to 10G dynamically:

asinfo -v "set-config:context=namespace;id=namespaceName;memory-size=10G"
ok
note

This is not a hard limit. A namespace's used memory could go above this threshold in some specific situations. The memory-size value is mainly used to infer the high-water-memory-pct and stop-writes-pct thresholds. It should be set according to the total available memory on the instance (leaving enough for the OS) and the memory allocated to other namespaces. An empty and unused namespace still allocates 1GiB of shared memory (Enterprise Edition).

migrate-order

[dynamic]
Context:

namespace

Default:

5

Introduced:

3.7.5

Number between 1 and 10 which determines the order namespaces are to be processed when migrating. Namespaces are processed in ascending order (lowest to highest) according to this configuration.

Additional information

Example: Set migrate-order to 1:

asinfo -v "set-config:context=namespace;id=namespaceName;migrate-order=1"
ok
note

A namespace with a higher migrate-order may still make some progress before namespaces with a lower migrate-order have completed. Here is an explanation for this behavior:
Migrations happen in units of partitions.
A partition is ready to migrate out (emigrate) if:
a. the node is a replica and the partition needs to be sent to the master for merging.
b. the node is a master for the partition and has received and merged all different versions of the partition from the replicas.

So on a node, even if a namespace has a lower migrate-order, if the node is master for a partition, it has to wait for the replicas to send it their copies of this partition before it can emigrate the merged partition back to the replicas. To maintain a strict migrate-order, the node would have to just wait and do nothing. However, to speed up the entire migration process, the node is allowed to emigrate higher migrate-order namespace partitions if they are ready.

migrate-retransmit-ms

[dynamic]
Context:

namespace

Default:

5000

Introduced:

3.11

How long to wait for success, in milliseconds, before retrying a migration related transaction. In versions prior to 3.10.1, this is actually governed by the transaction-retry-ms configuration. In version 3.10.1, even though migrate-retransmit-ms is honored and set to 5000ms, it cannot be retrieved through the info protocol and cannot be set.

Additional information

Example: Set migrate-retransmit-ms to 2500:

asinfo -v "set-config:context=namespace;id=namespaceName;migrate-retransmit-ms=2500"
ok

migrate-sleep

[dynamic]
Context:

namespace

Default:

1

Introduced:

3.7.5

Number of microseconds to sleep after each record migration. This parameter can be decreased to 0 in order to speed up migrations. Refer to manage migrations for further details.

Additional information

Example: Set migrate-sleep to 0:

asinfo -v "set-config:context=namespace;id=namespaceName;migrate-sleep=0"
ok

min-avail-pct

[dynamic]
Context:

namespace

Subcontext:

storage-engine device or pmem

Default:

5

Introduced:

3.1.10 (device) 4.8.0 (pmem)

Disallow writes (except deletes, replica writes and migration writes) when device_available_pct on one of the devices (or pmem files) configured for the namespace is below this specified percentage.

Additional information
note

Writes will also be disallowed when the memory utilization for the namespace hits the configured stop-writes-pct.

caution

We do not recommend setting this value below 5%. Doing so may not leave enough buffer room for replica writes and migration writes, which may lead to not having enough free blocks for defragmentation to recover, in which case the node would need a cold start to recover.
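Example (following the dynamic set-config pattern used throughout this page; the namespace name and value are illustrative, and the storage-engine subcontext prefix is not required): set min-avail-pct to 10:

asinfo -v "set-config:context=namespace;id=namespaceName;min-avail-pct=10"
ok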

min-level

[dynamic]
Context:

namespace

Subcontext:

geo2dsphere-within

Default:

1

Introduced:

3.7.0.1

Minimum depth (number of subdivisions) to use for a single cell. This defines the maximum cell size to be used.

The allowable range for this parameter is 0 to 30. At level 1 the cell size is 21,252,753 square kilometers.

Additional information

Example: Changing min-level dynamically:

asinfo -v "set-config:context=namespace;id=namespacename;geo2dsphere-within-min-level=5"
ok
note

Cannot be set dynamically in versions prior to 4.4.

mount

[enterprise][static]
Context:

namespace

Subcontext:

index-type flash, index-type pmem

Introduced:

4.3.0.2 (flash) 4.5.0.1 (pmem)

Path to the mount directory (typically on NVMe SSD). There may be more than one mount per namespace. Although not recommended, a mount may be shared with other namespaces. For sizing details when using index-type flash, refer to the Capacity Planning page.

When using index-type pmem with auto-pin numa, configured mounts that are not on the local NUMA node are ignored. Therefore, different instances of Aerospike server running on different NUMA nodes may share the same configured mounts without the operator needing to determine which mounts are on which NUMA nodes.

Additional information
note

When requesting the configuration via the 'info' API, the key for a particular mount will be storage-engine.mount[ix] where 'ix' is an index to identify this mount with its associated statistics (such as the statistic index-type.mount[ix].age).

mounts-high-water-pct

[enterprise][dynamic]
Context:

namespace

Subcontext:

index-type flash, index-type pmem

Default:

0

Introduced:

4.3.0.2 (flash) 4.5.0.1 (pmem)

Data will be evicted if the mount's utilization is greater than this specified percentage (of mounts-size-limit).

Setting this parameter to zero (which is the default) disables this threshold.

Additional information
note

For additional discussion, see Namespace Data Retention Configuration.

Prior to Aerospike version 4.9, the default was 80.

Setting this parameter to 0 in releases earlier than Aerospike 4.9 is not supported and may trigger immediate evictions.

mounts-size-limit

[enterprise][required][dynamic]
Context:

namespace

Subcontext:

index-type flash, index-type pmem

Introduced:

4.3.0.2 (flash) 4.5.0.1 (pmem)

Maximum amount of total device space for the mount(s) on this namespace. For example, if there are two mount points of 100GB each, then mounts-size-limit should be set to 200GB. The minimum size is 4 GiB and the maximum must not exceed the total capacity of all the mount points. This does not prevent sprigs from being allocated beyond the limit, but rather enforces the eviction of records based on the mounts-high-water-pct configuration, which is measured against the index usage (based on the number of records rather than the number of sprigs). Refer to All Flash Capacity Sizing for further details.

Required to be explicitly set when using index-type flash or index-type pmem.
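Example (a minimal sketch matching the two-mount scenario above; the namespace name and mount paths are placeholders): two 100GB mounts with mounts-size-limit set to their combined capacity:

namespace someNameSpaceName {
index-type flash {
mount /mnt/nvme-index-0
mount /mnt/nvme-index-1
mounts-size-limit 200G
}
...
}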

namespace

[static]
Context:

namespace

Note: this is namespace in the namespace context, not namespace in the XDR context. Search for namespace, and look at the Context heading to make sure you are working with the correct parameter.

Defines a namespace. For more information, see Namespace Configuration.

Additional information

Example: To define namespace someNameSpaceName:

...
namespace someNameSpaceName {
...
memory-size 256G
replication-factor 2
storage-engine device {
...
}
}
...
caution

There is a limit on the number of namespaces in a cluster. See Upper Sizing Bounds and Naming Constraints.

ns-forward-xdr-writes

[dynamic]
Context:

namespace

Default:

false

Introduced:

3.3.26

Removed:

5.0.0

In Aerospike 5.0, this parameter was replaced by forward. This parameter provides fine-grained control at the namespace level to forward writes that originated from another XDR to the specified destination datacenters (in the xdr section). This parameter is effective when forward-xdr-writes in the xdr section is set to false. If forward-xdr-writes in the xdr section is set to true, all namespaces are forwarded irrespective of the namespace-level setting (ns-forward-xdr-writes).

Additional information

Example: Enable ns-forward-xdr-writes on the namespace:

asinfo -v "set-config:context=namespace;id=namespaceName;ns-forward-xdr-writes=true"
ok
note

To change this dynamically, you must target asd's service port, not XDR's.

caution

If setting this to 'true', be aware of your topology and ensure you aren't creating a forwarding loop.

nsup-hist-period

[dynamic]
Context:

namespace

Default:

3600

Introduced:

4.5.1

The interval (secs) at which the object size histograms, as well as the time-to-live (ttl) histogram, are updated. Setting nsup-hist-period to a value of 0 will disable these histogram updates. Refer to the histogram info command for further details on the object size and ttl histograms.

Additional information
note

If nsup-hist-period is set to zero dynamically, subsequent info commands to get an object size or ttl histogram will, if any exist, return the last histogram generated.
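Example (following the dynamic set-config pattern used throughout this page; the namespace name is a placeholder): disable the histogram updates by setting nsup-hist-period to 0:

asinfo -v "set-config:context=namespace;id=namespaceName;nsup-hist-period=0"
ok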

nsup-period

[dynamic]
Context:

namespace

Default:

0

Introduced:

4.5.1

The interval at which the main expiration/eviction thread (called nsup, the namespace supervisor) wakes up to process the namespace. The default value of nsup-period 0 disables the namespace supervisor for the namespace.

By default, the value is in seconds. You can also set this value in minutes, hours, or days, with notation like 1m or 1h or 1d. For additional discussion, see Namespace Data Retention Configuration.

Additional information

Example: Set nsup-period to 600 seconds dynamically for a namespace:

asinfo -v "set-config:context=namespace;id=namespaceName;nsup-period=600"
ok
note

If nsup-period is dynamically set to zero while nsup is working, nsup will finish its current cycle and then become dormant.

Be sure that time is synchronized across nodes in a cluster. For Aerospike Server 4.5.1 or later, for each namespace where nsup is enabled (that is, nsup-period not zero), writes are suspended if cluster clock skew exceeds 40 seconds. Make sure that the Network Time Protocol (NTP) or other time synchronization mechanism is installed, configured, and functioning properly.

Prior to Aerospike version 4.9, the default was 120.

caution

As of Aerospike version 4.9, the server will not start if nsup-period is 0 (the default) but default-ttl is non-zero, unless allow-ttl-without-nsup is set true.

nsup-threads

[dynamic]
Context:

namespace

Default:

1

Introduced:

4.5.1

The number of dedicated expiration/eviction threads for nsup to use when processing the namespace. Must be at least 1, and at most 128.

Additional information

Example: Set nsup-threads to 3 dynamically for a namespace:

asinfo -v "set-config:context=namespace;id=namespaceName;nsup-threads=3"
ok
note

If nsup-threads is dynamically changed while nsup is working, nsup will finish its current cycle and then apply the new thread count with the next cycle.

num-partitions

[static]
Context:

namespace

Subcontext:

si

Default:

32

Configuration to alter the number of secondary index trees that are used for query lookups.

Additional information

Increasing this value reduces the depth of the sindex trees and may help secondary index lookups perform better. However, it also adds memory overhead, so monitor memory utilization and benchmark when tuning this configuration.

obj-size-hist-max

[dynamic]
Context:

namespace

Default:

100

Removed:

4.2.0.2

This controls the objsz histogram increment size and, therefore, controls the maximum size of records that are covered in the objsz histogram. An nsup cycle must run for an updated objsz histogram to be generated after this value is changed.
Removed as of version 4.2.0.2. Refer to the histogram info command for additional details.

Additional information

Example: Set obj-size-hist-max to 200:

asinfo -v "set-config:context=namespace;id=namespaceName;obj-size-hist-max=200"
ok
note

The histogram always has 100 buckets. Each bucket has a size of rblock x obj-size-hist-max / 100. With the default value of 100, this covers records up to 100 x 128 bytes, or 12.5 KiB. Changing this to a value of 1000 would mean the histogram still has 100 buckets, each of size 128 x 1000 / 100 = 1280 bytes. Any value specified in the config or set dynamically that is not a multiple of 100 is rounded up to the nearest 100. See hist-dump for more details.

partition-tree-locks

[static]
Context:

namespace

Default:

8

Introduced:

3.11

Removed:

4.2

Number of lock pairs (tree lock and reduce lock) per partition. Removed as of version 4.2 (hard-coded to the max 256 in that version and above). Must be an exact power of 2, between 1 and 256. Must not exceed partition-tree-sprigs. Providing more locks reduces potential contention between searches for different records (tree lock) as well as between a create/delete and a reduce (reduce lock).

Additional information
note

Per namespace memory overhead: there is a fixed base size of 64K plus 1M per 16 partition-tree-sprigs and 320K per partition-tree-locks. Additionally the Enterprise Edition also requires an extra 320K per 16 partition-tree-sprigs to support fast restart.

A good minimal guideline is to stay with the default of 8 until the cluster size exceeds 15, then double it at every cluster size doubling (16 for cluster sizes 16 to 31, 32 for cluster sizes 32 to 63, etc.). Indeed, the larger the cluster, the fewer partitions each node owns, creating more potential contention on the locks.

partition-tree-sprigs

[static]
Context:

namespace

Default:

256

Introduced:

3.11

Number of tree sprigs per partition to use. Default value is 256 for versions 4.2 and above. Must be an exact power of 2. Common workloads and use cases would benefit from 4096 or 8192 sprigs. For workloads potentially requiring more (values higher than 32K), Enterprise Edition licensees should contact Aerospike support for guidance. Even if the memory overhead seems acceptable, configuring too many sprigs may not only provide no benefits, but could actually adversely affect a cluster:

  • A sub-cluster would have to accommodate all the sprigs that were in the larger cluster (unless min-cluster-size has been configured to prevent the formation of such a sub-cluster).
  • The memory required would also have to be contiguous (fragmented memory may prevent the allocation).
  • Having too many sprigs on a node could delay shutdown and cause an unnecessary cold restart upon the subsequent restart.

    Changing this configuration parameter will force a cold start. Providing more trees (sprigs) reduces the number of levels and speeds up the search. It also causes the reduce lock blockage to be broken up (the reduce lock is unlocked between each sprig, and a sprig takes much less time to traverse than a single partition tree).
Additional information

Example: A 4-node cluster, replication-factor 2, 2048 partition-tree-sprigs. For versions < 4.2 using 8 partition-tree-locks. For versions >= 4.2, hard-coded to 256 partition-tree-locks per-partition.

For release 4.2 and above, the per-node namespace memory overhead for sprigs is:

Community Edition:  64K + (8M x 2 + 8B x 2048 x 4096 x 2) / 4  = 64K + 4M + 32M = 36.06M
Enterprise Edition: 64K + (8M x 2 + (8B + 5B) x 2048 x 4096 x 2) / 4 = 64K + 4M + 32M + 20M = 56.06M

For releases prior to 4.2, the per-node namespace memory overhead for sprigs is:

Community Edition:  64K + 2.5M + 128M = 130.56M
Enterprise Edition: 64K + 2.5M + 128M + 40M = 170.56M
note

Versions 4.2 and above:

  • Sprigs have a default and minimum value of 256, and can be set as high as 256M.
  • The value of partition-tree-locks is now hard-coded to 256 per partition. Each lock-pair is 8 bytes.
  • Each sprig is 8 bytes. Additionally, the Enterprise Edition also requires 5 bytes for each sprig.
  • Sprigs and locks are only allocated for partitions that are owned by the node. Therefore, as a cluster gets bigger, the overhead per node decreases.

For versions prior to 4.2:

  • Sprigs can be set as low as 16 and as high as 4096, but must be greater than partition-tree-locks.
  • Namespace memory overhead per node: there is a fixed base size of 64K plus 1M per 16 partition-tree-sprigs and 320K per partition-tree-locks. Additionally, the Enterprise Edition also requires an extra 320K per 16 partition-tree-sprigs to support fast restart.
  • Users who can afford the extra memory overhead should change this to at least 256 (overhead of 21M for Enterprise Edition) and may even want to go all the way to the maximum of 4096 (336M overhead for Enterprise Edition) to anticipate future growth (as changing this parameter will force a cold start).
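Example (illustrative; the namespace name is a placeholder, and changing this value forces a cold start as noted above): setting partition-tree-sprigs statically in the namespace stanza:

namespace someNameSpaceName {
partition-tree-sprigs 4096
...
}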

post-write-queue

[dynamic]
Context:

namespace

Subcontext:

storage-engine device

Default:

256

Write block buffers to keep as cache (per device). Only available for non data-in-memory storage configurations. The maximum allowed value is 2048 for versions prior to 3.16, 4096 for versions 3.16 through 4.6, and 8192 for versions 4.7 and above. Refer to the cache_read_pct value for how much of the read workload is being served by the post-write-queue. XDR use cases can leverage the post-write-queue, as writes are quasi-immediately read back to be shipped to the destination cluster(s). The read-page-cache configuration parameter can also be considered to leverage the page cache and help with latency for read-intensive workloads.

Additional information

Example: Set post-write-queue to 512:

asinfo -v "set-config:context=namespace;id=namespaceName;post-write-queue=512"
ok
note

Memory allocation for this depends on the write-block-size and the number of devices. For example, on a namespace with 2 devices and a 128 KiB write-block-size, the default memory allocated is 2 x 256 x 128 KiB (64 MiB); setting the value to 2048 would use 2 x 2048 x 128 KiB (512 MiB). Also, note that wblocks in the post-write-queue are not eligible to be defragmented. Therefore, the post-write-queue should be kept small compared to the overall device size, as the space allocated to the post-write-queue will not be defragmented.

prefer-uniform-balance

[enterprise][unanimous] [dynamic]
Context:

namespace

Default:

true

Introduced:

4.3.0.2

If true, this namespace will make an effort to distribute partitions evenly to all nodes. As of Aerospike Server version 4.7 the default value is true. To achieve uniform-balance, Aerospike must give up some migration performance for this namespace. Time required to complete migrations is only impacted when a node is either permanently added or removed; i.e., the time to complete migrations when a restarted node rejoins the cluster is not impacted.

Has to be followed by a recluster command to be effective.

For strong-consistency enabled namespaces, uniform-balance is computed for all nodes in the roster - if a node is offline, the balance will be less uniform (but likely better than without uniform-balance enabled). If the node is permanently down, or down for an extended duration, the administrator may choose to remove the offline node from the roster and issue a recluster command to readjust the partition distribution back to a uniform-balance.

Additional information

Example: Enable prefer-uniform-balance on the namespace:

Admin+> asinfo -v "set-config:context=namespace;id=namespaceName;prefer-uniform-balance=true"
aero-node1:3000 (10.0.3.41) returned:
ok

aero-node2:3000 (10.0.3.224) returned:
ok

aero-node4:3000 (10.0.3.196) returned:
ok

aero-node3:3000 (10.0.3.149) returned:
ok

Admin+> asinfo -v "recluster:"
aero-node1:3000 (10.0.3.41) returned:
ok

aero-node2:3000 (10.0.3.224) returned:
ignored-by-non-principal

aero-node4:3000 (10.0.3.196) returned:
ignored-by-non-principal

aero-node3:3000 (10.0.3.149) returned:
ignored-by-non-principal
note

If any node in the cluster does not have the prefer-uniform-balance set to true, the cluster reverts to not using the uniform balance scheme.

caution

For versions 4.3.0.2 to 4.3.0.9, enabling prefer-uniform-balance on cluster sizes which are a power of 2 (2, 4, 8, 16, etc) would cause migrations to be stuck.

For versions prior to 4.7.0, enabling prefer-uniform-balance in AP namespaces and not waiting for delta migrations to complete between node restarts in a rolling restart could cause non-optimal masters to be selected (which could lead to extra duplicate resolution on writes, and extra stale reads if duplicate resolution is not enabled for reads).

rack-id

[enterprise][dynamic]
Context:

namespace

Default:

0

Introduced:

3.13.0.1 (post cluster protocol change)

If this namespace should be rack-aware, which rack should this node be a part of. rack-id must be a positive integer, with a max possible value of 1000000. For strong-consistency enabled namespaces, the rack-id configuration is set through the roster itself. Refer to the Configure Rack-Aware in Strong Consistency Mode page for further details.

Additional information

Example:

rack-id 1

Set rack-id to 1 dynamically:

asinfo -v "set-config:context=namespace;id=namespaceName;rack-id=1"
ok

Set rack-id for multiple nodes at once. Note: for clarity, this command is shown across multiple lines with the backslash character (\), but you should enter it as a single line.

Admin+> asinfo -v "set-config:context=namespace;id=test;rack-id=101" \
with 192.168.10.2 192.168.10.4 192.168.10.5
node2.aerospike.com:3000 (192.168.10.2) returned:
ok
node5.aerospike.com:3000 (192.168.10.5) returned:
ok
node4.aerospike.com:3000 (192.168.10.4) returned:
ok

Set rack-id for strong consistency. Note: for clarity, this command is shown across multiple lines with the backslash character (\), but you should enter it as a single line.

Admin+> asinfo -v "roster-set:namespace=test; \
nodes=BB9070016AE4202@102,BB9060016AE4202@101, \
BB9050016AE4202@101,BB9040016AE4202@101,BB9020016AE4202@102"
node2.aerospike.com:3000 (192.168.10.2) returned:
ok
...
Admin+> asinfo -v "recluster:"
...

read-consistency-level-override

[dynamic]
Context:

namespace

Default:

off

Introduced:

3.3.26

When set to a non-default value, overrides the client-specified per-transaction read consistency level for this namespace. This configuration specifies whether the server is to consult internally the different versions of a record to determine the most-recent record value when duplicate resolving in an ongoing migration.
Values: off, one, all.
See the discussion of SC guarantee in Strong Consistency Mode.

Additional information

Example: Set read consistency level override to one in the configuration file (skip duplicate resolution):

read-consistency-level-override one

Dynamically override clients and set read consistency to one:

asinfo -v "set-config:context=namespace;id=namespaceName;read-consistency-level-override=one"
ok
note

strong-consistency enabled namespaces always duplicate resolve when migrations are ongoing and consult the different potential versions of a record before returning to the client. This configuration is therefore not available for strong-consistency enabled namespaces.

read-page-cache

[dynamic]
Context:

namespace

Subcontext:

storage-engine device

Default:

false

Introduced:

4.3.1

If true, disables the O_DIRECT and O_DSYNC flags during read transactions. This allows the OS to leverage the page cache and can help with latencies for some workload types. It should be tested or deployed on a single node prior to a full production rollout. This configuration should not be set to true for namespaces with data-in-memory set to true. It may be useful to set read-page-cache to true if using raw devices, or if using file storage with data-in-memory set to false and direct-files or commit-to-device set to true. Refer to the Buffering and Caching in Aerospike article for further details.

Additional information

Example: Set read-page-cache to true dynamically:

asinfo -v "set-config:context=namespace;id=namespaceName;read-page-cache=true"
ok
note

Performant storage sub-systems running on older kernels may be adversely impacted by this setting, as checking the page cache prior to accessing the storage sub-system may be penalizing.
Workloads with a higher cache_read_pct may be considered, but you should also check the impact of increasing the post-write-queue configuration parameter. Less performant storage sub-systems (network attached, for example) may greatly benefit from disabling the O_DIRECT and O_DSYNC flags.

tip

Using read-page-cache when the read workload is very uniform (no hotkey-type patterns) may not be beneficial and could lead to spending unnecessary CPU cycles, though this cost should usually be negligible.

reject-non-xdr-writes

[enterprise][dynamic]
Context:

namespace

Default:

false

Introduced:

5.0.0

Parameter to control the writes done by a non-XDR client. Setting it to true disallows writes from a non-XDR client (any regular client library).

This parameter is on the destination or target node in the namespace stanza, not the xdr stanza's dc's namespace sub-stanza.

This parameter is useful to control accidental writes by a non-XDR client to a namespace when it is not expected, and can be used for namespaces taking writes exclusively from XDR clients. When set to true, error code 10 will be returned and will tick the fail_xdr_forbidden statistic.

Additional information

Example: Namespace stanza on XDR destination:

namespace someNameSpaceName {
reject-non-xdr-writes true
...
}

Set reject-non-xdr-writes to true:

asinfo -v "set-config:context=namespace;id=namespaceName;reject-non-xdr-writes=true"
ok

reject-xdr-writes

[enterprise][dynamic]
Context:

namespace

Default:

false

Introduced:

5.0.0

Parameter to control whether to accept write transactions originating from an XDR client. Setting it to true disallows all writes from an XDR client (at a destination cluster) and allows only non-XDR clients to write.

This parameter is on the destination or target node in the namespace stanza, not the xdr stanza's dc's namespace sub-stanza.

This parameter is useful to control accidental writes by an XDR client. When set to true, error code 10 will be returned, disallowed writes will not be relogged by XDR and will tick the fail_xdr_forbidden statistic on the remote (destination) cluster.

Additional information

Example: Namespace stanza on XDR destination:

namespace someNameSpaceName {
reject-xdr-writes true
...
}

Set reject-xdr-writes to true:

asinfo -v "set-config:context=namespace;id=namespaceName;reject-xdr-writes=true"
ok

replication-factor

[unanimous] [static]
Context:

namespace

Default:

2

Number of copies of a record (including the master copy) maintained in the entire cluster.

Additional information

Example: Set the namespace's replication factor to 3 dynamically (version 6.0 and later, AP namespaces only):

asinfo -v 'set-config:context=namespace;id=namespaceName;replication-factor=3'
ok
note

For versions prior to 3.15.1.3, the effective replication factor is returned under the repl-factor name. For versions 3.15.1.3 and later, the effective replication factor is returned under effective_replication_factor.

For versions 6.0 and later, replication-factor is dynamic for AP namespaces (non strong-consistency).

caution

Changes to replication-factor require a full cluster restart, except for AP namespaces with version 6.0 and later, when replication-factor may be changed dynamically.

scheduler-mode

[static]
Context:

namespace

Subcontext:

storage-engine device

Default:

(set by system)

Optional I/O scheduler for non-NVMe drives (SSD or HDD).

Additional information
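Example (a minimal sketch; the device path is a placeholder and noop is assumed to be among the supported scheduler values): setting scheduler-mode in the storage-engine device subcontext:

storage-engine device {
device /dev/sdb
scheduler-mode noop
...
}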

serialize-tomb-raider

[enterprise][static]
Context:

namespace

Subcontext:

storage-engine device or pmem

Default:

false

Introduced:

4.3.0 (device) 4.8.0 (pmem)

Prevent different namespaces' tomb raids from running concurrently.
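Example (illustrative; the rest of the storage-engine stanza is elided): enabling serialize-tomb-raider statically:

storage-engine device {
serialize-tomb-raider true
...
}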

set

[static]
Context:

namespace

Begins a set context; set must be followed by the set name.

set-delete

[unanimous] [dynamic]
Context:

namespace

Default:

false

Introduced:

3.6.1

Removed:

After 3.12

Replaced by info command truncate as of version 3.12. Refer to the truncate info command for details. Setting it to true will delete the specified set in the namespace. Resets to false after deletion occurs.
For more information on deleting sets, see Managing Sets

Additional information

Example: Enable set-delete on the set:

asinfo -v "set-config:context=namespace;id=namespaceName;set=setname;set-delete=true"
ok

set-disable-eviction

[unanimous] [dynamic]
Context:

namespace

Subcontext:

set

Default:

false

Introduced:

3.6.1

Removed:

5.6

Setting it to true will protect the set from evictions. Setting this parameter does not affect the TTL of records within the set. Records can have a TTL and will expire as normal.

This parameter was renamed to disable-eviction in version 5.6.

Additional information

Example: Enable set-disable-eviction on the set:

asinfo -v "set-config:context=namespace;id=namespaceName;set=setname;set-disable-eviction=true"
ok

# Setting the parameter under the namespace definition in a static manner:

set set1 {
set-disable-eviction true
}
set set2 {
set-disable-eviction true
}
set test {
set-disable-eviction true
}

note

Eviction may well happen at startup; as such, it is good practice to enter protected sets into aerospike.conf as shown above to prevent a protected set from being evicted during cold start.

set-enable-xdr

[dynamic]
Context:

namespace

Subcontext:

set

Default:

use-default

Removed:

5.0.0

Replaced in Aerospike 5.0 by ship-only-specified-sets and ignore-set.

Set-specific parameter to enable/disable shipping through XDR.

Additional information

If set to 'use-default', it inherits the behavior from sets-enable-xdr. If set to 'true', XDR will ship this set (overriding sets-enable-xdr). If set to 'false', XDR will not ship this set (overriding sets-enable-xdr).

Example: Changing set-enable-xdr dynamically:

asinfo -v "set-config:context=namespace;id=namespaceName;set=setname;set-enable-xdr=true"
ok
asinfo -v "set-config:context=namespace;id=namespaceName;set=setname;set-enable-xdr=false"
ok

set-evict-hwm-count

[dynamic]
Context:

namespace

Subcontext:

set

Default:

0 (Disabled)

Removed:

3.6.1

How many records may reside in this set before the server begins evicting records from this set.

set-stop-write-count

[dynamic]
Context:

namespace

Subcontext:

set

Default:

0 (Disabled)

Removed:

3.6.1

How many records may be in this set before the server begins rejecting writes to this set.

set-stop-writes-count

[dynamic]
Context:

namespace

Subcontext:

set

Default:

0 (Disabled)

Introduced:

3.7.0.1

Removed:

5.6

How many records may be in this set before the server begins rejecting writes to this set.

This parameter was renamed to stop-writes-count in version 5.6.

Additional information

The set-stop-writes-count parameter will only take effect when the number of records reaches the threshold configured. Once the threshold is reached, clients will get Error Code 22 (AEROSPIKE_ERR_FAIL_FORBIDDEN) back.

Example: Dynamically set the count to two thousand:

asinfo -v "set-config:context=namespace;id=namespaceName;set=setname;set-stop-writes-count=2000"

sets-enable-xdr

[dynamic]
Context:

namespace

Default:

true

Removed:

5.0.0

Replaced in Aerospike 5.0 by ship-only-specified-sets.

Specifies whether XDR should ship all sets in a namespace or not.

Additional information

This setting can be overridden at set level by the set-enable-xdr parameter.

Example: Set sets-enable-xdr dynamically to false:

asinfo -v "set-config:context=namespace;id=namespaceName;sets-enable-xdr=true"
ok

si

[static]
Context:

namespace

Begins a si (Secondary Index) context; si must be followed by the secondary index name.

si-gc-max-units

[dynamic]
Context:

namespace

Subcontext:

si

Default:

1000

Removed:

3.14.0

Removed in version 3.14.0 and above. Refer to sindex-gc-period and sindex-gc-max-rate. Maximum number of elements we walk in the index tree for garbage-collection in one cycle.
Use gc-max-units for dynamic config and si-gc-max-units in the configuration file.

Additional information

Example:

asinfo -v "set-config:context=namespace;id=<namespace>;\
indexname=<index>;gc-max-units=10000"

si-gc-period

[dynamic]
Context:

namespace

Subcontext:

si

Default:

1000

Removed:

3.14.0

Removed in version 3.14.0 and above. Refer to sindex-gc-period and sindex-gc-max-rate. Interval, in milliseconds, between two iterations of index garbage collection.
Use gc-period for dynamic config and si-gc-period in the configuration file.

Additional information

Example:

asinfo -v "set-config:context=namespace;id=<namespace>;\
indexname=<index>;gc-period=100"

si-tracing

[dynamic]
Context:

namespace

Subcontext:

si

Removed:

3.14.0

The value that indicates the level of global tracing for this index.

sindex-startup-device-scan

[static]
Context:

namespace

Subcontext:

storage-engine device

Default:

false

Introduced:

5.3.0

At startup, build secondary indexes by scanning devices.

If most records in the namespace are in sets with secondary indexes, setting this configuration true will very likely speed up the secondary index rebuild. Whether this will be faster also depends on other factors, such as average record size, and number of configured devices. Ultimately, experimentation is the best way to determine whether to set this configuration or not.

Additional information
caution

sindex-startup-device-scan and data-in-memory cannot both be configured true.
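Example (illustrative; the rest of the storage-engine stanza is elided, and the caution above about data-in-memory still applies): enabling the device scan at startup:

storage-engine device {
sindex-startup-device-scan true
...
}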

single-bin

[unanimous] [static]
Context:

namespace

Default:

false

Setting it to true disallows multiple bins (columns) for a record.

Additional information
note

Used to save storage space and provide enhanced performance on update transactions where a prior read is not required. Transactions such as a List append, a Map key-value update, or an increment operation still require a read. Requires storage reinitialization. Single-bin with data-in-memory does not allow storing the user key (sendKey true). To store the user key with single-bin, the storage must not be configured with data-in-memory true. For UDF transactions against single-bin namespaces, the bin name must be an empty string when reading or writing the bin (for versions 3.15 and above only; for previous versions, UDFs are not supported against single-bin namespaces). For further recommendations on this parameter, contact Aerospike.
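Example (illustrative; the namespace name is a placeholder): enabling single-bin statically in the namespace stanza:

namespace someNameSpaceName {
single-bin true
...
}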

single-query-threads

[dynamic]
Context:

namespace

Default:

4

Introduced:

6.0.0

Maximum number of threads allowed for a single query. Value range: 1-128.

Additional information

Example: Set single-query-threads to 12:

asinfo -v "set-config:context=namespace;id=namespaceName;single-query-threads=12"
ok

single-scan-threads

[dynamic]
Context:

namespace

Default:

4

Introduced:

4.7.0

Removed:

6.0.0

Maximum number of threads allowed for a single scan. Value range: 1-128.

Additional information

Example: Set single-scan-threads to 12:

asinfo -v "set-config:context=namespace;id=namespaceName;single-scan-threads=12"
ok

This parameter was renamed to single-query-threads in server 6.0.0.

stop-writes-count

[dynamic]
Context:

namespace

Subcontext:

set

Default:

0 (Disabled)

Introduced:

5.6

How many records may be in this set before the server begins rejecting writes to this set.

This parameter was renamed from set-stop-writes-count in version 5.6.

Additional information

The stop-writes-count parameter will only take effect when the number of records reaches the threshold configured. Once the threshold is reached, clients will get Error Code 22 (AEROSPIKE_ERR_FAIL_FORBIDDEN) back.

Example: Dynamically set the count to two thousand:

asinfo -v "set-config:context=namespace;id=namespaceName;set=setName;stop-writes-count=2000"

stop-writes-pct

[dynamic]
Context:

namespace

Default:

90

Disallow writes when memory utilization (tracked under memory_used_bytes) is above this specified percentage:

  • This threshold is checked every 10 seconds.
  • Deletes, replica writes, and migration writes are still allowed.
Additional information

Example: Set stop-writes-pct to 95:

asinfo -v "set-config:context=namespace;id=namespaceName;stop-writes-pct=95"
ok
note

Writes are also disallowed when the available percentage on one of the namespace's devices drops to min-avail-pct. Refer to the stop_writes and clock_skew_stop_writes metrics for more details on all the different situations that put a node in read-only mode.

storage-engine

[static]
Context:

namespace

Default:

memory

Determines whether writes are persisted or not; accepted values are:

  • device - Data written to this node will be persisted to either a raw device or a file.
  • memory - Data written to this node will only write to DRAM.
  • pmem - Data written to this node will be written to persistent memory (Enterprise Edition only, and requires a feature to be enabled in the feature-key-file).
Additional information

Example: To define an In-Memory Only Namespace:

storage-engine memory

To define a Persisted Namespace:

storage-engine device {
...
}

To define a Persistent Memory Namespace:

storage-engine pmem {
...
}

strict

[static]
Context:

namespace

Subcontext:

geo2dsphere-within

Default:

true

Introduced:

3.7.0.1

Additional sanity check from Aerospike to validate whether the points returned by S2 fall within the user's query region. When set to false, Aerospike does not do this additional check and sends the results as is.

strong-consistency

[enterprise][static]
Context:

namespace

Default:

false

Introduced:

4.0

Set the namespace to Strong Consistency mode to favor consistency over availability. Allows linearized reads to be enabled. Refer to the Configuring Strong Consistency and Consistency Management pages for further details.
Requires a feature to be enabled in the feature-key-file.

Additional information
note

Changing an Available mode (AP) namespace into a Strong Consistency mode (SC) namespace by simply turning the feature on in the configuration is not supported. In order to create a strongly consistent namespace, the storage needs to be emptied. Migrating into an SC namespace can be done by performing a backup on an AP namespace and restoring into an SC namespace.
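Example (illustrative; the namespace name is a placeholder and the rest of the stanza is elided): enabling strong consistency statically on a namespace:

namespace someNameSpaceName {
strong-consistency true
...
}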

strong-consistency-allow-expunge

[enterprise][dynamic]
Context:

namespace

Default:

false

Introduced:

4.0

When set to true, allows non-durable deletes to be used with strong-consistency. Expunges are not 'consistent'.

Additional information

Example:

Admin+> asinfo -v "set-config:context=namespace;id=bar;strong-consistency-allow-expunge=true"
172.17.0.10:3000 (172.17.0.10) returned:
ok

0e0d1a1651ae:3000 (172.17.0.9) returned:
ok

tomb-raider-eligible-age

[enterprise][dynamic]
Context:

namespace

Default:

86400

Introduced:

3.10

Number of seconds to retain a tombstone, even though it has been discovered to be safe to remove. This protects a cluster from older records being re-introduced when a node that was out of the cluster for some time rejoins. If a node was out of the cluster for longer than tomb-raider-eligible-age, it should have all of its data removed before being brought back into the cluster. Default is 1 day.

Additional information

Example: Set tomb-raider-eligible-age to 43200 (1/2 day):

asinfo -v "set-config:context=namespace;id=namespaceName;tomb-raider-eligible-age=43200"
ok

tomb-raider-period

[enterprise][dynamic]
Context:

namespace

Default:

86400

Introduced:

3.10

Minimum amount of time, in seconds, between tomb-raider runs. Default is 1 day.

As of version 4.3.0, setting tomb-raider-period to a value of 0 will deactivate tomb raider.

Additional information

Example: Set tomb-raider-period to 43200 (1/2 day):

asinfo -v "set-config:context=namespace;id=namespaceName;tomb-raider-period=43200"
ok
note

If tomb-raider-period is set to zero dynamically while a tomb raid is in progress, the tomb raid will complete and then the tomb raider will become dormant.

tomb-raider-sleep

[enterprise][dynamic]
Context:

namespace

Subcontext:

storage-engine device or pmem

Default:

1000

Introduced:

3.10.0 (device) 4.8.0 (pmem)

Number of microseconds to sleep in between large block reads on disk or pmem storage files. Default is 1 ms (1000µs).

Additional information

Example: Set tomb-raider-sleep to 2000:

asinfo -v "set-config:context=namespace;id=namespaceName;tomb-raider-sleep=2000"
ok

transaction-pending-limit

[dynamic]
Context:

namespace

Default:

20

Introduced:

4.3.1.3

Maximum pending transactions that can be queued up to work on the same key. A value of 0 removes the limit (unlimited), and a value of 1 will allow a maximum of 1 transaction to be queued up in the rw-hash behind a transaction that is already in progress. This parameter context was moved from service to namespace in version 4.3.1.3.

Additional information

Example: Set transaction-pending-limit to 3 dynamically:

asinfo -v "set-config:context=namespace;id=namespaceName;transaction-pending-limit=3"
ok

Prior to 4.3.1.3, run this instead:

asinfo -v "set-config:context=service;transaction-pending-limit=3"
ok
note

Increase this limit if the application works on a small set of keys more frequently. If this value is exceeded, the overflow transactions fail and the client receives error code 14, Key Busy (tracked on the server side under the fail_key_busy statistic).

truncate-threads

[dynamic]
Context:

namespace

Default:

4

Introduced:

4.6.0

The number of dedicated threads to use for truncations in the namespace. Must be at least 1, and at most 128.

Additional information

Example: Set truncate-threads to 6 dynamically for a namespace:

asinfo -v "set-config:context=namespace;id=namespaceName;truncate-threads=6"
ok
note

If truncate-threads is dynamically changed, it will not affect any currently active truncation, and will be effective beginning with the next truncation round.

write-block-size

[static]
Context:

namespace

Subcontext:

storage-engine device

Default:

1M

Size in bytes of each I/O block that is written to the disk. This effectively sets the maximum object size. The maximum allowed size is 8388608 (or 8M) for versions 4.2 and higher. For versions prior to 4.2, the maximum allowed size is 1048576 (or 1M). Larger write-block-size may adversely impact performance. Refer to the FAQ - Write Block Size knowledge base article for other details.

Additional information

Supports the following suffixes:

  • K Kibibyte (KiB)

  • M Mebibyte (MiB)

Example:

write-block-size 128K
note

Recommendations:

  • SSD: 131072 (128K)
  • HDD: 1048576 (1M)

Adjust block size to make it efficient for I/Os.

For pmem, this configuration is not available as the write-block-size is hard-coded to 8MiB.

write-commit-level-override

[dynamic]
Context:

namespace

Default:

off

Introduced:

3.3.26

When set to a non-default value, overrides the client-specified per-transaction write commit level for this namespace.
Values: off, all, master.
See the discussion of SC guarantee in Strong Consistency Mode.

Additional information

Example: Set write commit level override to master in the configuration file (return upon master side completion without waiting for replica side):

write-commit-level-override master

Dynamically override clients and set write commit level to master:

asinfo -v "set-config:context=namespace;id=namespaceName;write-commit-level-override=master"
ok
note

Starting with Aerospike 5.7, this policy has a circuit breaker. When configured to master and the fabric layer is unable to keep up with replication, it automatically converts to all in order to push back on the client and protect the service.

Starting with Aerospike 3.16.0.1, when configured to master, transactions will not wait for the replica write ack, avoiding potential latency increases when receiving multiple transactions for the same key that would otherwise be queued up on the rw hash (rw_in_progress).

When configured to all, in case of failure to replicate properly (either node owning master copy not able to reach replica or able to reach it but response from replica not received), a timeout will be returned to the client but the transaction will not be rolled back on the master side and the replica side may or may not have the update (based on where exactly the transaction broke between master and replica). Refer to transaction-max-ms for details on this mechanism.

strong-consistency enabled namespaces always write (or attempt to write) to all replicas prior to returning to the client. This configuration is therefore not available for strong-consistency enabled namespaces. For strong consistency use cases, refer to the strong-consistency configuration parameter.

xdr-bin-tombstone-ttl

[enterprise][dynamic]
Context:

namespace

Default:

86400

Introduced:

5.2.0

If bin-policy is set to ship changed bins (policies other than the default all), bin deletions will create bin tombstones. This parameter specifies the time-to-live (in seconds) for those bin tombstones. 0 means never expire. Bin tombstones whose TTL expired will be removed only on a subsequent write operation on the record. The default value in version 5.2.x used to be 0 and it changed to 86400 (1 day) as of 5.3.

Additional information

Example: Set xdr-bin-tombstone-ttl to 600 seconds:

asinfo -v "set-config:context=namespace;id=namespaceName;xdr-bin-tombstone-ttl=600"
ok

xdr-remote-datacenter

[dynamic]
Context:

namespace

Removed:

5.0.0

As of Aerospike 5.0, replaced by the dc parameter.

Name of the datacenter to forward this namespace to.

Additional information

The xdr-remote-datacenter parameter should be defined for each remote datacenter XDR is to ship to. This can be set dynamically as of version 3.8.1.

The Datacenter names are defined in the XDR stanza.

Example: Dynamically associating and disassociating a namespace to a remote datacenter:

asinfo -v "set-config:context=namespace;id=namespaceName;xdr-remote-datacenter=DC1;action=add"
asinfo -v "set-config:context=namespace;id=namespaceName;xdr-remote-datacenter=DC1;action=remove"
note

It is not safe to dynamically remove a remote datacenter in versions prior to 5.x. Please contact Aerospike support for further input if a datacenter has to be dynamically removed in such older versions.

xdr-tomb-raider-period

[enterprise][dynamic]
Context:

namespace

Default:

120

Introduced:

5.0.0

Minimum amount of time, in seconds, between xdr-tomb-raider runs. Default is 120 seconds. This only applies to xdr_tombstones and not regular tombstones from durable deletes. Setting xdr-tomb-raider-period to a value of 0 will deactivate the xdr-tomb-raider.

Additional information

Example: Set xdr-tomb-raider-period to 500:

asinfo -v "set-config:context=namespace;id=namespaceName;xdr-tomb-raider-period=500"
ok

xdr-tomb-raider-threads

[enterprise][dynamic]
Context:

namespace

Default:

1

Introduced:

5.0.0

The number of dedicated threads used by the xdr-tomb-raider to clear xdr_tombstones.

Additional information

Example: Set xdr-tomb-raider-threads to 4:

asinfo -v "set-config:context=namespace;id=namespaceName;xdr-tomb-raider-threads=4"
ok

network

access-address

[static]
Context:

network

Subcontext:

service

Default:

service address if specified or list of available IP addresses

An access address is an IP address that is announced to clients and used by clients for connecting to the cluster. Because of NAT, a cluster node's access addresses may be different from its bind addresses (the address configuration directive under the service stanza). If not specified, the IP in the address configuration directive under the service stanza is used. If the service address is set to 'any', then access-address will be a list of all available IP addresses (this is not recommended if there are multiple IP addresses).

Multiple access addresses can be specified. IPv4, IPv6 and DNS names can be used to specify access addresses. DNS names are expanded to all IP addresses they resolve to, IPv4 (A DNS resource records) as well as IPv6 (AAAA DNS resource records).

A different set of access addresses can also be specified through alternate-access-address, for example for XDR clients that may not be able to reach the cluster through the same IP addresses as the local clients. Finally, in Enterprise Edition versions 3.11 and above, TLS equivalents are exposed through tls-access-address and tls-alternate-access-address. If access-address is not specified, the bind addresses (through the address config) will be published to clients.

Additional information

The info service-clear-std command returns a node's access address(es), and the peers-clear-std command returns the access address(es) of a node's peers in the cluster.

Example:

service {
...
access-address 10.0.0.104
access-address 10.0.0.103
...
}
caution

In versions prior to 3.10, multiple entries cannot be listed. IPv6 and DNS entries also cannot be specified for those older versions. If NAT is used, the 'virtual' keyword must be added after the access-address configuration for versions between 3.3.26 and 3.10.

access-port

[static]
Context:

network

Subcontext:

service

Default:

service port

Port number associated with access-address. If not specified, it defaults to the port value in the service stanza.
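Example (a minimal sketch; the address and port values are illustrative): specifying access-port alongside access-address in the network service stanza:

service {
access-address 10.0.0.104
access-port 4000
...
}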

address

[static]
Context:

network

Subcontext:

service

Default:

any (in config)

The IP address at which the server listens for client connections. Set this value to any for the server to listen on all the IP addresses available on the machine. Set this value to an interface name (e.g., eth0, eth1) when using the auto-pin feature.

Additional information
caution

For versions prior to 3.7.0.1, a local instance of XDR will not be able to connect if this is not set to any.
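Example (illustrative; 3000 is the conventional Aerospike service port): listening on all available interfaces:

service {
address any
port 3000
...
}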

address

[static]
Context:

network

Subcontext:

heartbeat

Default:

address value in 'service'

IP address for cluster-state heartbeat communication for mesh. Also used for multicast mode as of version 3.10 to specify which interface(s) to send heartbeats from. In versions prior to 3.10, used to specify multicast group. The default value in multicast mode in versions prior to 3.10 was 239.1.99.222. For versions prior to 3.10, multicast send interface was defined using interface-address.

address

[static]
Context:

network

Subcontext:

fabric

Default:

address value in 'service'

IP address at which the server listens (binds) for fabric traffic (inter node communication, for replica writes, migrations, duplicate resolution and more).

alternate-access-address

[static]
Context:

network

Subcontext:

service

Introduced:

3.10

Can be used to choose a specific IP address or DNS name that will be published as an alternate list for clients to connect to (other than the one based on address and access-address). XDR can make use of this by specifying dc-use-alternate-services true for versions prior to 5.0.0, and use-alternate-access-address for versions 5.0.0 and later. Replaces alternate-address as of version 3.10.

Additional information

Typically, this is used to isolate clients based on public/private address or NATted environments like cloud deployments. Ability to specify a DNS name gives extra benefits.
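Example (a minimal sketch; the private and public addresses are illustrative, with 203.0.113.10 standing in for a NATted public address): publishing an alternate address for external clients:

service {
address any
access-address 10.0.0.104
alternate-access-address 203.0.113.10
...
}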

alternate-access-port

[static]
Context:

network

Subcontext:

service

Default:

access-port

Port number associated with alternate-access-address. If not specified, it defaults to the access-port value.

alternate-address

[static]
Context:

network

Subcontext:

service

Introduced:

3.7.1

Removed:

3.10

Use alternate-access-address as of version 3.10. Can be used to choose a specific IP address or DNS name that will be published as an alternate list for clients to connect to (other than the one based on address and access-address). XDR can make use of this by specifying dc-use-alternate-services true.

Additional information

Typically, this is used to isolate clients based on public/private addresses or in NATted environments such as cloud deployments. The ability to specify a DNS name provides additional flexibility.

ca-file

[enterprise][static]
Context:

network

Subcontext:

tls

Introduced:

3.15

Path to the CA file needed for mutual authentication. Only one of ca-file or ca-path is required. For XDR TLS connections, one of the two is mandatory. Defaults to the system default (/etc/ssl/certs/cacert.pem on Ubuntu), except for XDR, where it should be set if needed.

Additional information

Example:

ca-file <path to file>

ca-path

[enterprise][static]
Context:

network

Subcontext:

tls

Introduced:

3.15

Path to the directory containing the CA certificates for mutual authentication. Requires the openssl rehash <path to directory> command to be run on the ca-path directory containing the CA certs. Only one of the ca-file or ca-path configurations is required. For XDR TLS connections, one of the two is mandatory. Defaults to the system default (/etc/ssl/certs on Ubuntu), except for XDR, where it should be set if needed.

Additional information

Example:

ca-path <path to directory>
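
The rehash step might look like the following, assuming a hypothetical directory holding the CA certificates:

openssl rehash /etc/aerospike/ssl/ca-certs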

cert-blacklist

[enterprise][static]
Context:

network

Subcontext:

tls

Introduced:

3.15

Path to the file containing the serial numbers of rogue certificates. Use this when there is a need to revoke or blacklist rogue certificates. The blacklist is automatically reloaded and applied to subsequent connections if the file itself changes.

Additional information

Example:

cert-blacklist <path to file>

cert-file

[enterprise][static]
Context:

network

Subcontext:

tls

Introduced:

3.15

Path to the TLS certificate file when TLS is enabled. The certificate is automatically reloaded on subsequent connections if the file itself changes. This dynamic certificate rotation feature did not apply to all TLS configurations: fabric and heartbeat TLS certificates on versions prior to 4.7.0.5, 4.6.0.8, 4.5.3.10, 4.5.2.10, 4.5.1.15 and 4.5.0.19 required a rolling restart to rotate expired certificates. Rotation of ECDSA private keys and certificates, and of password-protected private keys, was also not supported until those same versions.

In version 5.1+, for the alternative integration with HashiCorp Vault, the value of the configuration parameter must be prefixed with literal vault: and must be followed by the name of the secret on the Vault service. For more information, see Optional security with Vault integration.

In version 5.3+, the configuration parameter can be set to env-b64:<variable_name>, and the base64-encoded cert data will be read from the named environment variable and decoded into binary form.

When specified via Vault or environment variable, this parameter is read when the server starts and is not re-read thereafter.

Additional information

Example:

cert-file <path to file>
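
For versions 5.1+ and 5.3+ respectively, the Vault and environment-variable forms might look like the following (the secret name and variable name are hypothetical):

cert-file vault:aerospike-tls-cert
cert-file env-b64:AEROSPIKE_CERT_B64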

channel-bulk-fds

[static]
Context:

network

Subcontext:

fabric

Default:

2

Introduced:

3.11.1.1

Number of bulk channel sockets to open to each neighbor node. Twice this number of sockets per neighbor will be opened since the neighbor nodes will open the same number of sockets back to this node.

Additional information
note

Minimum: 1

Maximum: 128. Exceeding this maximum will prevent the server from starting.

channel-bulk-recv-threads

[dynamic]
Context:

network

Subcontext:

fabric

Default:

4

Introduced:

3.11.1.1

Number of threads processing intra-cluster messages arriving through the bulk channel. This channel is used for record migrations during rebalance.

Additional information

Example: Set channel-bulk-recv-threads to 6 dynamically:

asinfo -v "set-config:context=network;fabric.channel-bulk-recv-threads=6"
ok
note

Minimum: 1
Maximum: 128. Exceeding this maximum will prevent the server from starting.

channel-ctrl-fds

[static]
Context:

network

Subcontext:

fabric

Default:

1

Introduced:

3.11.1.1

Number of control channel sockets to open to each neighbor node. Twice this number of sockets per neighbor will be opened since the neighbor nodes will open the same number of sockets back to this node.

Additional information
note

Minimum: 1
Maximum: 128. Exceeding this maximum will prevent the server from starting.

channel-ctrl-recv-threads

[dynamic]
Context:

network

Subcontext:

fabric

Default:

4

Introduced:

3.11.1.1

Number of threads processing intra-cluster messages arriving through the control channel. This channel is used to distribute cluster membership change events as well as partition migration control messages.

Additional information

Example: Set channel-ctrl-recv-threads dynamically to 6:

asinfo -v "set-config:context=network;fabric.channel-ctrl-recv-threads=6"
ok
note

Minimum: 1
Maximum: 128. Exceeding this maximum will prevent the server from starting.

channel-meta-fds

[static]
Context:

network

Subcontext:

fabric

Default:

1

Introduced:

3.11.1.1

Number of meta channel sockets to open to each neighbor node. Twice this number of sockets per neighbor will be opened since the neighbor nodes will open the same number of sockets back to this node.

Additional information
note

Minimum: 1
Maximum: 128. Exceeding this maximum will prevent the server from starting.

channel-meta-recv-threads

[dynamic]
Context:

network

Subcontext:

fabric

Default:

4

Introduced:

3.11.1.1

Number of threads processing intra-cluster messages arriving through the meta channel. This channel is used to distribute System Meta Data (SMD) after cluster change events.

Additional information

Example: Set channel-meta-recv-threads dynamically to 6:

asinfo -v "set-config:context=network;fabric.channel-meta-recv-threads=6"
ok
note

Minimum: 1
Maximum: 128. Exceeding this maximum will prevent the server from starting.

channel-rw-fds

[static]
Context:

network

Subcontext:

fabric

Default:

8

Introduced:

3.11.1.1

Number of read/write channel sockets to open to each neighbor node. Twice this number of sockets per neighbor will be opened since the neighbor nodes will open the same number of sockets back to this node.

Additional information
note

Minimum: 1
Maximum: 128. Exceeding this maximum will prevent the server from starting.

channel-rw-recv-pools

[static]
Context:

network

Subcontext:

fabric

Default:

1

Introduced:

5.1

Number of thread pools for multiple epolls (Linux system call for scalable I/O event notification) for the fabric read/write receive channel.

Should be used only when TLS is configured.

As of version 5.1, configuration parameter channel-rw-recv-threads must be a multiple of channel-rw-recv-pools.
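
For example, a sketch pairing two receive pools with the default 16 receive threads, since 16 is a multiple of 2 (for illustration only):

fabric {
    channel-rw-recv-pools 2
    channel-rw-recv-threads 16
}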

channel-rw-recv-threads

[dynamic]
Context:

network

Subcontext:

fabric

Default:

16

Introduced:

3.11.1.1

Number of threads processing intra-cluster messages arriving through the rw (read/write) channel. This channel is used for replica writes, proxies, duplicate resolution, and various other intra-cluster record operations.

Minimum: 1.
Maximum: 128. Exceeding this maximum will prevent the server from starting.

As of version 5.1, configuration parameter channel-rw-recv-threads must be a multiple of channel-rw-recv-pools.

Additional information

Example: Set channel-rw-recv-threads to 24 dynamically:

asinfo -v "set-config:context=network;fabric.channel-rw-recv-threads=24"
ok

cipher-suite

[enterprise][static]
Context:

network

Subcontext:

tls

Introduced:

3.15

Ciphers to include. This is not set by default by Aerospike and reverts to what the system uses, usually ALL:!aNULL:!eNULL.

Additional information

Example:

cipher-suite ALL:!COMPLEMENTOFDEFAULT:!eNULL
note

The parameter follows the same cipher string format as OpenSSL; see the OpenSSL documentation.

connect-timeout-ms

[dynamic]
Context:

network

Subcontext:

heartbeat

Default:

500

Introduced:

5.3

Node connection timeout within the cluster, in milliseconds. This timeout also applies to establishing and accepting TLS connections.

Note that this value must be at least 50, and cannot be larger than one-third of the product of heartbeat.interval and heartbeat.timeout.
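
For example, with the default heartbeat.interval of 150 and the default heartbeat.timeout of 10, the product is 1500 ms, so connect-timeout-ms cannot exceed 500 ms, which is also its default value.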

Additional information

Example: Set heartbeat.connect-timeout-ms to 1200:

asinfo -v 'set-config:context=network;heartbeat.connect-timeout-ms=1200'
ok

disable-localhost

[static]
Context:

network

Subcontext:

service

Default:

false

Introduced:

5.6

When set to true, the service will not listen on localhost.

interface-address

[static]
Context:

network

Subcontext:

heartbeat

Removed:

3.10

Refer to the 3.10 network page for details for version 3.10 and later. For versions prior to 3.10: IP address published by the node to receive heartbeat messages. If neither address nor interface-address is specified, the service subcontext interface/IP address will be used.

interval

[dynamic]
Context:

network

Subcontext:

heartbeat

Default:

150

Interval in milliseconds at which heartbeats are sent. From version 3.10.0.3, interval can be set to a minimum value of 50 and a maximum of 600000 (10 minutes).

Additional information

Example: Set heartbeat.interval to 250:

asinfo -v 'set-config:context=network;heartbeat.interval=250'
ok

For releases prior to 3.9.1:

asinfo -v 'set-config:context=network.heartbeat;interval=250'
ok
note

Increasing heartbeat.interval increases the cluster's tolerance to minor network fluctuations; however, it also means that the cluster reacts more slowly to a genuine cluster event. With a higher heartbeat.interval, it takes longer for the cluster to acknowledge that a node has left and, as a result, there may be a greater impact on the application. This setting contributes to the calculated quantum interval. The quantum interval is 20% of the product of heartbeat.timeout and heartbeat.interval. The total time to detect a node failure on the client side would be: (heartbeat.interval x heartbeat.timeout) + 20% (heartbeat.interval x heartbeat.timeout) + Client_tend_interval. In general, though, given proper client policy settings for retries, clients would still be able to reach one of the nodes in the cluster, which may then result in a proxy to the correct node.
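
As a worked example, with the default heartbeat.interval of 150 ms, the default heartbeat.timeout of 10, and an assumed client tend interval of 1000 ms: (150 x 10) + 20% of (150 x 10) + 1000 = 1500 + 300 + 1000 = 2800 ms to detect the node failure on the client side.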

keepalive-enabled

[static]
Context:

network

Subcontext:

fabric

Default:

true

Introduced:

3.5.12

Enables the nodes to send keep-alive messages to each other.

keepalive-intvl

[static]
Context:

network

Subcontext:

fabric

Default:

1

Introduced:

3.5.12

Interval in seconds between successive keep-alive packets.

Additional information
note

If you set this keep-alive parameter to a non-positive number, the node does not override the corresponding Linux kernel system default for the parameter.

keepalive-probes

[static]
Context:

network

Subcontext:

fabric

Default:

1

Introduced:

3.5.12

Maximum number of keep-alive packets the node sends in succession before declaring the socket dead.

Additional information
note

If you set this keep-alive parameter to a non-positive number, the node does not override the corresponding Linux kernel system default for the parameter.

keepalive-time

[static]
Context:

network

Subcontext:

fabric

Default:

1

Introduced:

3.5.12

Time in seconds from the last user data packet sent on the socket before sending the first keep-alive packet.

Additional information
note

If you set this keep-alive parameter to a non-positive number, the node does not override the corresponding Linux kernel system default for the parameter.

key-file

[enterprise][static]
Context:

network

Subcontext:

tls

Introduced:

3.15

Path to the key file when TLS is enabled. The key is automatically reloaded on subsequent connections if the file itself changes. This dynamic certificate rotation feature did not apply to all TLS configurations: fabric and heartbeat TLS certificates on versions prior to 4.7.0.5, 4.6.0.8, 4.5.3.10, 4.5.2.10, 4.5.1.15 and 4.5.0.19 required a rolling restart to rotate expired certificates. Rotation of ECDSA private keys and certificates, and of password-protected private keys, was also not supported until those same versions.

In version 5.1+, for the alternative integration with HashiCorp Vault, the value of the configuration parameter must be prefixed with literal vault: and must be followed by the name of the secret on the Vault service. For more information, see Optional security with Vault integration.

In version 5.3+, the configuration parameter can be set to env-b64:<variable_name>, and the base64-encoded key will be read from the named environment variable and decoded into binary form.

When specified via Vault or environment variable, this parameter is read when the server starts and is not re-read thereafter.

Additional information

Example:

key-file <path to file>
caution

In Aerospike Server versions 5.0 and 5.1, if an XDR datacenter is configured to use a TLS specification that includes key-file but does not include key-file-password the system will crash. This problem is corrected by hotfixes to these versions available from the Download page.

key-file-password

[enterprise][static]
Context:

network

Subcontext:

tls

Introduced:

4.3.1

Password for the key-file. This directive has the following possible formats:

  • env:FKPWD - the password will be read from environment variable FKPWD
  • file:/path_to/fkpwd - the password will be read from file /path_to/fkpwd
  • vault:name_of_secret_in_vault - the password will be read from the name of the secret where it is stored in Vault.

In version 5.1+, for the alternative integration with HashiCorp Vault, the value of the configuration parameter must be prefixed with literal vault: and must be followed by the name of the secret on the Vault service. For more information, see Optional security with Vault integration.

This parameter is read when the server starts and is not re-read thereafter.

Additional information

Example:

key-file-password file:<path to keyfile pwd>
caution

In Aerospike Server versions 5.0 and 5.1, if an XDR datacenter is configured to use a TLS specification that includes key-file but does not include key-file-password the system will crash. This problem is corrected by hotfixes to these versions available from the Download page.

latency-max-ms

[enterprise][static]
Context:

network

Subcontext:

fabric

Default:

5

Introduced:

3.13.0

Maximum latency in milliseconds between nodes that the clustering system will tolerate. Used to derive the quantum interval, which helps determine cluster reformation time after a cluster event. Increasing this value can increase the amount of time it takes for a new cluster to form.

This value is also used in the HLC (Hybrid Logical Clock) when determining if an event happened before or after another event. If two events occur less than this value apart, the ordering is indeterminate.

The impact of this parameter on cluster reformation after cluster events is discussed in detail in the What is the Quantum Interval article. Changing this value may be appropriate in certain scenarios whereby intra-node network latency is necessarily high. Enterprise Licensees should consult with Aerospike Support before changing this configuration.

Additional information
note

Allowable range is 0 to 1000.

mcast-ttl

[static]
Context:

network

Subcontext:

heartbeat

Default:

0

Removed:

3.10

TTL for multicast packets.

Additional information
note

IP multicast datagrams are sent with a time-to-live (TTL) of 1 by default. In Aerospike configuration "0" means use the default which is 1. Multicast datagrams with initial TTL 1 are restricted to the same subnet.

mesh-address

[static]
Context:

network

Subcontext:

heartbeat

Removed:

3.3.19

Mesh address on which cluster nodes (other than primary) communicate. Applies only when mode is mesh.

mesh-port

[static]
Context:

network

Subcontext:

heartbeat

Removed:

3.3.19

Mesh port on which cluster nodes (other than primary) communicate for inter-node communication. Applies only when mode is mesh.

mesh-seed-address-port

[static]
Context:

network

Subcontext:

heartbeat

Default:

false

Introduced:

3.3.19

Mesh address (host name or IP) and port info for seed server(s). These are other addresses in the cluster that Aerospike will bootstrap from. A new line is required for each additional bootstrap node. Applies only when mode is mesh.

Additional information

Example:

mesh-seed-address-port 10.10.0.116 3002
mesh-seed-address-port aerospike_a_0 3002
note

Note: for server versions 3.9.0.3 and earlier, only IP addresses are honored in this configuration.

caution

When using fully qualified names in versions 4.3.1 and earlier, names that would not DNS resolve could cause clusters to split if the DNS server slows down and the name resolution takes longer to fail. A successful DNS resolution replaces the name with the IP address until the subsequent restart.

mode

[unanimous] [static]
Context:

network

Subcontext:

heartbeat

May be either multicast or mesh. In case of multicast, all cluster nodes must be in the same subnet.

Additional information

Example:

mode multicast
caution

Changes to heartbeat mode require a cluster restart.

mtu

[enterprise][dynamic]
Context:

network

Subcontext:

heartbeat

Default:

0

Introduced:

3.9.1

The maximum transmission unit (MTU) of the underlying network, as detected by the heartbeat system.

Additional information
note

Allowed value is any integer.

multicast-group

[static]
Context:

network

Subcontext:

heartbeat

Default:

239.1.99.222

Introduced:

3.10

IP address for cluster-state heartbeat communication over multicast.

multicast-ttl

[static]
Context:

network

Subcontext:

heartbeat

Default:

0

Introduced:

3.10

TTL for multicast packets.

Additional information
note

IP multicast datagrams are sent with a time-to-live (TTL) of 1 by default. In Aerospike configuration "0" means use the default which is 1. Multicast datagrams with initial TTL 1 are restricted to the same subnet.

network-interface-name

[static]
Context:

network

Subcontext:

service

Removed:

3.10

The name of the interface to attach to. Removed as of version 3.10. Use the node-id-interface configuration at the global service level (not the network service subcontext) as a replacement to have the Node ID generated based on a specific interface's MAC address.

Additional information
tip

Used if the network interface is not one of eth, wlan or bond, for example eno167777736. This locks in the interface (and the node's IP) that will be bound to for the service. It is also the interface whose MAC address is used to generate the Node ID. Finally, this interface is also used for heartbeat and fabric, unless specified otherwise under the heartbeat subcontext through the address and interface-address configurations.

port

[static]
Context:

network

Subcontext:

service

Default:

3000 (in config)

The port at which the server listens for client connections.

port

[static]
Context:

network

Subcontext:

info

Default:

3003 (in config)

Port used for info management. Responds to ASCII commands.

Removing the info stanza from the configuration file disables the port.

When security is enabled in Enterprise Edition, this port is disabled for info commands. This port will still be open on the Operating System but not used by Aerospike.

port

[static]
Context:

network

Subcontext:

heartbeat

Default:

9918 (in multicast config)

Port for cluster-state communication (mesh or multicast).

port

[static]
Context:

network

Subcontext:

fabric

Default:

3001 (in config)

Port for inter-node communication within a cluster.

protocol

[unanimous] [dynamic]
Context:

network

Subcontext:

heartbeat

Default:

v3 (v. 3.14.0)

Heartbeat protocol version to be used by the cluster. Should be one of v1, v2, v3 or none. The protocol can only be changed on all nodes at once. In version 3.9.1.1 and below, client traffic should first be paused, the protocol should be set to none, and then the protocol should be set to the new version.

Additional information
  • v1 = Original protocol version

  • v2 = Expandable cluster size protocol version (depends on paxos-max-cluster-size)

  • v3 = Improved cluster management and flexible cluster size (removes paxos-max-cluster-size dependency). Introduced in version 3.10.0.3.

  • none = Used only for dynamically changing protocol

Example: For releases after 3.10.0.3:

Set heartbeat.protocol to v3.

asinfo -v 'set-config:context=network;heartbeat.protocol=v3'
ok

For releases prior to 3.9.1.1:

Client traffic must be stopped, protocol should be changed to none, and then set heartbeat.protocol to v2.

asinfo -v 'set-config:context=network;heartbeat.protocol=none'
ok
asinfo -v 'set-config:context=network;heartbeat.protocol=v2'
ok

protocols

[enterprise][static]
Context:

network

Subcontext:

tls

Default:

TLSv1.2

Introduced:

3.15

TLS protocol versions to include. The default is to only allow TLS protocol version 1.2.

Additional information

Example:

protocols  -all,+TLSv1.2
note

In version 4.6 the default protocols configuration parameter was changed from "-all,+TLSv1.2" to "TLSv1.2".

reuse-address

[static]
Context:

network

Subcontext:

service

Default:

true

Removed:

3.10

Removed (now always true). Was used to avoid the "address in use" network socket bind error, caused by the TIME_WAIT state, when restarting the Aerospike service.

send-threads

[static]
Context:

network

Subcontext:

fabric

Default:

8

Introduced:

3.11.1.1

Number of intra-node send threads to be used. The send-threads operate across all fabric channels.

Additional information
note

Minimum: 1
Maximum: 128. Exceeding this maximum will prevent the server from starting.

timeout

[dynamic]
Context:

network

Subcontext:

heartbeat

Default:

10

Number of missed heartbeats after which the remote node will be declared dead. As of version 3.11, values lower than 3 are not allowed, as this could lead to very frequent timeouts, which could destabilize a cluster.

Additional information

Example: Set heartbeat.timeout to 20:

asinfo -v 'set-config:context=network;heartbeat.timeout=20'
ok

For releases prior to 3.9.1:

asinfo -v 'set-config:context=network.heartbeat;timeout=20'
ok
note

Increasing heartbeat.timeout increases the cluster's tolerance to minor network fluctuations; however, it also means that the cluster reacts more slowly to a genuine cluster event. With a higher heartbeat.timeout, it takes longer for the cluster to acknowledge that a node has left and, as a result, there may be a greater impact on the application. This setting contributes to the calculated quantum interval. The quantum interval is 20% of the product of heartbeat.timeout and heartbeat.interval. The total time to detect a node failure on the client side would be: (heartbeat.interval x heartbeat.timeout) + 20% (heartbeat.interval x heartbeat.timeout) + Client_tend_interval. In general, though, given proper client policy settings for retries, clients would still be able to reach one of the nodes in the cluster, which may then result in a proxy to the correct node.

tls

[enterprise][static]
Context:

network

Subcontext:

tls

Introduced:

3.15

Definition of TLS parameters for a given tls-name. Can be <cluster-name> (literally), <hostname> (literally) or user defined. Refer to the TLS Configuration Manual for further details.

Additional information

Example:

tls <cluster-name> {
    cert-file path-to-cert-file
    key-file path-to-key-file
}

tls-access-address

[enterprise][static]
Context:

network

Subcontext:

service

Default:

any

Introduced:

3.11

TLS equivalent of access-address.

tls-access-port

[enterprise][static]
Context:

network

Subcontext:

service

Default:

tls-port

Transport Layer Security (TLS) equivalent of access-port.

tls-address

[enterprise][static]
Context:

network

Subcontext:

service,heartbeat,fabric

Introduced:

3.11

Bind address for TLS, the IP address at which the server listens for client connections, heartbeat connections or fabric connections (based on the subcontext this is set at). Similar to address when not using TLS. Will default to any if not set.

tls-alternate-access-address

[enterprise][static]
Context:

network

Subcontext:

service

Introduced:

3.11

TLS equivalent of alternate-access-address.

tls-authenticate-client

[enterprise][static]
Context:

network

Subcontext:

service

Default:

any

Introduced:

3.15

The TLS authentication mode to run the server with, with regard to the service (client connections). Refer to the TLS Configuration Manual for further details. Multiple tls-authenticate-client directives can be specified.

Additional information

Options:

There are three modes in which TLS can be configured: standard authentication (server only), mutual authentication (TLS client and TLS server), and mutual authentication with subject validation. If not specified, it defaults to any (mutual authentication without subject validation).

  • false: Use this when only the client authenticates the server.

  • any: Use this for two-way (mutual) authentication; both client and server need to be authenticated. Also check the ca-file and ca-path configurations when set to this mode.

  • user-defined: Use this for two-way (mutual) authentication along with subject validation. This is the TLS name a cluster node would expect clients to present on incoming connections.

    Note: false and any are incompatible with each other and incompatible with a subject name, so if false or any is used, then there can only be one tls-authenticate-client directive.
    Note: There isn't any tls-authenticate-client for heartbeat and fabric. They always validate the subject name in their peer's certificate and expect it to match the TLS name.

Example:

service {
<...>
tls-authenticate-client remote-xdr-dc.aerospike.com
tls-authenticate-client local-clients.aerospike.com
<...>
}

tls-cafile

[enterprise][static]
Context:

network

Subcontext:

service

Introduced:

3.11

Removed:

3.15

Removed, replaced by ca-file in the tls sub-stanza as of version 3.15. Path to tls-cafile needed when tls-mode is authenticate-both. Only one of tls-cafile or tls-capath is required. Defaults to the system's default (/etc/ssl/certs/cacert.pem on Ubuntu).

Additional information

Example:

tls-cafile <path to file>

tls-capath

[enterprise][static]
Context:

network

Subcontext:

service

Introduced:

3.11

Removed:

3.15

Removed, replaced by ca-path in the tls sub-stanza as of version 3.15. Path to the directory of the cafile. This config is needed when tls-mode is authenticate-both. Only one of tls-cafile or tls-capath config is required. Defaults to the system's default (/etc/ssl/certs on Ubuntu).

Additional information

Example:

tls-capath <path to directory>

tls-cert-blacklist

[enterprise][dynamic]
Context:

network

Subcontext:

service

Introduced:

3.11

Removed:

3.15

Removed, replaced by cert-blacklist in the tls sub-stanza as of version 3.15. Path to the file containing the serial numbers of rogue certificates. Use this when there is a need to revoke or blacklist rogue certificates.

Additional information

Example:

tls-cert-blacklist <path to file>
asinfo -v 'set-config:context=service;tls-cert-blacklist=.../aerospike/blacklist.txt'

tls-certfile

[enterprise][static]
Context:

network

Subcontext:

service

Introduced:

3.11

Removed:

3.15

Removed, replaced by cert-file in the tls sub-stanza as of version 3.15. Path to the certificate if using authenticate-server or authenticate-both as the tls-mode.

Additional information

Example:

tls-certfile <path to file>

tls-cipher-suite

[enterprise][static]
Context:

network

Subcontext:

service

Introduced:

3.11

Removed:

3.15

Removed, replaced by cipher-suite in the tls sub-stanza as of version 3.15. Ciphers to include.

Additional information

Example:

tls-cipher-suite ALL:!COMPLEMENTOFDEFAULT:!eNULL

tls-keyfile

[enterprise][static]
Context:

network

Subcontext:

service

Introduced:

3.11

Removed:

3.15

Removed, replaced by key-file in the tls sub-stanza as of version 3.15. Path to the key file if using authenticate-server or authenticate-both as the tls-mode config.

Additional information

Example:

tls-keyfile <path to file>

tls-mesh-seed-address-port

[enterprise][static]
Context:

network

Subcontext:

heartbeat

Default:

false

Introduced:

3.15

TLS mesh address (host name or IP) and port info for seed server(s). These are other addresses in the cluster that Aerospike will bootstrap from. A new line is required for each additional bootstrap node. Applies only when mode is mesh.

Additional information

Example:

tls-mesh-seed-address-port 10.10.0.116 3012
tls-mesh-seed-address-port aerospike_a_0 3022
note

Note: For server version 3.9.0.3 and earlier, only IP addresses are honored in this configuration.

tls-mode

[enterprise][static]
Context:

network

Subcontext:

service

Default:

authenticate-server

Introduced:

3.11

Removed:

3.15

Removed, replaced by tls-authenticate-client as of version 3.15. The tls-mode you want to run the server with.

Additional information

Options:

There are three modes in which TLS can be configured: authenticate-server, authenticate-both and encrypt-only. If not specified, it defaults to authenticate-server.

  • authenticate-both: Use this for two-way authentication; both client and server need to be authenticated. Also check the tls-cafile and tls-capath configurations when set to this mode.

  • authenticate-server: Use this when only the client authenticates the server.

  • encrypt-only: This only encrypts the data on the transport layer; there is no authentication of either the server or the client. "encrypt-only" is not supported for XDR traffic.

tls-name

[enterprise][static]
Context:

network

Subcontext:

service,heartbeat,fabric

Introduced:

3.11

For versions 3.15 and above, this parameter specifies which TLS parameters to use for the given context TLS connections. The TLS parameters are configured under the matching tls sub-stanza. This also implicitly specifies the TLS name the node will present on incoming client connections. Refer to TLS Name Clarification for further details.

Additional information

For versions 3.11 to 3.14, this directly sets the tls-name to be used, as the other TLS related parameters are set directly within the same sub-stanza for those older versions. It can be set to one of the following:

  • <cluster-name> (literally) which will then pick the cluster-name defined in the Aerospike config file.

  • <hostname> (literally) which will then pick up the hostname from the system.

  • User specific where any string can be picked, for example, my-tls-name.

    This should match the certificate as well as what the client will be sending. Refer to the TLS Guide for more information.

Example:

tls-name <cluster-name>
tls-name <hostname>
tls-name my-tls-name

tls-port

[enterprise][static]
Context:

network

Subcontext:

service,heartbeat,fabric

Introduced:

3.11

Port that is TLS enabled at which the server listens for client connections, heartbeat connections or fabric connections (based on the subcontext this is set at).

tls-protocols

[enterprise][dynamic]
Context:

network

Subcontext:

service

Default:

-all,+TLSv1.2

Introduced:

3.11

Removed:

3.15

Removed, replaced by protocols in the tls sub-stanza as of version 3.15. TLS protocol versions to include.

Additional information

Example:

tls-protocols  -all,+TLSv1.2

security

disable-tls

[enterprise][static]
Context:

security

Subcontext:

ldap

Default:

false

Introduced:

4.1

Whether or not to disable the use of TLS for LDAP server connections.

enable-ldap

[enterprise][static]
Context:

security

Default:

false

Introduced:

4.1

Removed:

5.7

Enables LDAP. Refer to the LDAP Configuration documentation for further details.
Requires a feature to be enabled in the feature-key-file.

As of version 5.7, this item is removed, and LDAP is enabled by the presence of an ldap section within the security section of the configuration file.

enable-quotas

[enterprise][static]
Context:

security

Default:

false

Introduced:

5.6

Enables the use of read and write rate quotas to limit transaction rates. Quotas can be added to roles, and users assigned such roles will be restricted according to the associated quotas.
Note that when enable-quotas is true, read and write transaction per second (tps) rates and scan record per second (rps) rates are tracked for all users (even users with no quotas) and can be displayed with the "show users" command in asadm.
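
For example, the tracked rates can be inspected from asadm (authentication options omitted; assumes a tools version that supports the show users command):

asadm -e 'show users'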

enable-security

[enterprise][static]
Context:

security

Default:

false

Removed:

5.7

Enables Access Control (ACL). Refer to Configuring Access Control for further details.

Aerospike Enterprise Edition versions 4.6.0.4 and above, 4.5.3.6, 4.5.2.6, 4.5.1.11 and 4.5.0.15 support enabling ACL through a rolling restart, allowing environments running on the latest Client Libraries (supporting mixed security modes) to turn on ACL without downtime. The AER-6099 improvement was made to allow the System Metadata (SMD) sub-system to support mixed security modes on the server side.

As of version 5.7, this item is removed, and ACL is enabled by the presence of a security section in the configuration file.

Additional information
note

For Aerospike Enterprise Edition versions not having the AER-6099 improvement, enabling ACL requires a cluster shut down.

caution

When configuring enable-security to true with Aerospike Enterprise Server versions 4.6 or newer, some Aerospike Clients are incompatible. Please refer to the following Knowledge Base article for details to ensure you are using a compatible Aerospike Client version.

When configuring enable-security to true with Cross-Datacenter Replication (XDR), a cluster installed with Aerospike Enterprise Edition Server versions 4.1.0.1 to 4.3.0.6 cannot ship to an Aerospike Enterprise Edition Server version 4.6 or newer. The simplest workaround is to avoid using those incompatible Aerospike Enterprise Edition Server versions 4.1.0.1 to 4.3.0.6. Refer to the following Knowledge Base article for further details.

ldap-login-threads

[enterprise][static]
Context:

security

Default:

8

Introduced:

4.1.0.1

Removed:

5.7

Number of threads to use for LDAP logins.

This parameter was renamed to login-threads and moved from the main security context to the ldap subcontext in version 5.7.

Additional information
note

Allowable range is 1 to 64.

local 0

[enterprise][unanimous] [static]
Context:

security

Subcontext:

syslog

Write to "local0" facility as well as to default syslog file. You can define local0 in /etc/rsyslog.conf.

login-threads

[enterprise][static]
Context:

security

Subcontext:

ldap

Default:

8

Introduced:

5.7

Number of threads to use for LDAP logins.

This parameter was renamed from ldap-login-threads and moved from the main security context to the ldap subcontext in version 5.7.

Additional information
note

Allowable range is 1 to 64.

polling-period

[enterprise][dynamic]
Context:

security

Subcontext:

ldap

Default:

300 (5 minutes)

Introduced:

4.1

How frequently (in seconds) to query the LDAP server for user group membership information. Allowable range is 0 to 86400 (24 hours). Note that a value of 0 means do not poll.

privilege-refresh-period

[enterprise][dynamic]
Context:

security

Default:

300

Frequency in seconds with which the node verifies credentials and permissions for active client connections.

Additional information

Example: Set privilege-refresh-period to 200 dynamically:

asinfo -v "set-config:context=security;privilege-refresh-period=200"
ok

query-base-dn

[enterprise][required][static]
Context:

security

Subcontext:

ldap

Introduced:

4.1

Distinguished name of the LDAP directory entry at which to begin the search when querying for a user's group membership information.

Additional information
note

Certain characters in the value of this parameter must be escaped. See Parameters whose values must be escaped.

query-user-dn

[enterprise][static]
Context:

security

Subcontext:

ldap

Introduced:

4.1

Distinguished name of the user designated for user group membership queries.

Additional information
note

Certain characters in the value of this parameter must be escaped. See Parameters whose values must be escaped.

query-user-password-file

[enterprise][static]
Context:

security

Subcontext:

ldap

Introduced:

4.1

Location of the clear text password of the user specified for user group membership queries. This directive has the following possible formats:

  • file:/path_to/qupwd - the password will be read from file /path_to/qupwd
  • vault:name_of_secret_in_vault - the password will be read from the name of the secret where it is stored in Vault.
  • env:QUPWD - the password will be read from the named environment variable (e.g. QUPWD). (version 5.3+)

In version 5.1+, for the alternative integration with HashiCorp Vault, the value of the configuration parameter must be prefixed with literal vault: and must be followed by the name of the secret on the Vault service. For more information, see Optional security with Vault integration.

Additional information
note

As of version 5.1, the password contents are re-read whenever the password is used.

caution

In version 5.1.0.3, this configuration parameter is dynamic and setting it dynamically may cause a crash.

report-authentication

[enterprise][dynamic]
Context:

security

Subcontext:

syslog

Set to true to report successful authentications in the syslog file.
This parameter is dynamic as of version 5.6.

Additional information

Example: Set the parameter dynamically:

asinfo -v "set-config:context=security;syslog.report-authentication=true"
ok

report-authentication

[enterprise][dynamic]
Context:

security

Subcontext:

log

Set to true to report successful authentications in aerospike.log.
This parameter is dynamic as of version 5.6.

Additional information

Example: Set the parameter dynamically:

asinfo -v "set-config:context=security;log.report-authentication=true"
ok

report-data-op

[enterprise][dynamic]
Context:

security

Subcontext:

syslog

Set this to report on data transactions for a namespace (and optionally a set). Report transactions in the syslog file.
This parameter is dynamic as of version 5.6.

Additional information

Example:

report-data-op {namespace} {set}

Dynamically enable reporting of data operations to the syslog for set 'setA' in namespace 'test':

asinfo -v "set-config:context=security;syslog.report-data-op=true;namespace=test;set=setA"
ok
caution

Setting this for namespaces or sets with medium and higher throughput could significantly degrade overall performance and cause flooding in the logs.

report-data-op

[enterprise][dynamic]
Context:

security

Subcontext:

log

Set this to report on data transactions for a namespace (and optionally a set). Report transactions in aerospike.log.
This parameter is dynamic as of version 5.6.

Additional information

Example:

report-data-op {namespace} {set}

Dynamically enable reporting of data operations to aerospike.log for set 'setA' in namespace 'test':

asinfo -v "set-config:context=security;log.report-data-op=true;namespace=test;set=setA"
ok
caution

Setting this for namespaces or sets with medium and higher throughput could significantly degrade overall performance and cause flooding in the logs.

report-data-op-role

[enterprise][dynamic]
Context:

security

Subcontext:

syslog

Introduced:

5.6

Set this to report on data transactions for all users having a given role. Report transactions in the syslog file.

Additional information

Example: Enable reporting of data operations by all users having the 'billing' role:

report-data-op-role billing

Dynamically disable reporting of data operations to the syslog by all users having the 'billing' role:

asinfo -v "set-config:context=security;syslog.report-data-op=false;role=billing"
ok
caution

Setting this for roles with medium and higher throughput could significantly degrade overall performance and cause flooding in the logs.

report-data-op-role

[enterprise][dynamic]
Context:

security

Subcontext:

log

Introduced:

5.6

Set this to report on data transactions for all users having a given role. Report transactions in aerospike.log.

Additional information

Example: Enable reporting of data operations by all users having the 'billing' role:

report-data-op-role billing

Dynamically disable reporting of data operations to aerospike.log by all users having the 'billing' role:

asinfo -v "set-config:context=security;log.report-data-op=false;role=billing"
ok
caution

Setting this for roles with medium and higher throughput could significantly degrade overall performance and cause flooding in the logs.

report-data-op-user

[enterprise][dynamic]
Context:

security

Subcontext:

syslog

Introduced:

5.6

Set this to report on data transactions for a given user. Report transactions in the syslog file.

Additional information

Example: Enable reporting of data operations by user 'charlie':

report-data-op-user charlie

Dynamically enable reporting of data operations by user 'fred':

asinfo -v "set-config:context=security;syslog.report-data-op=true;user=fred"
ok
caution

Setting this for users with medium and higher throughput could significantly degrade overall performance and cause flooding in the logs.

report-data-op-user

[enterprise][dynamic]
Context:

security

Subcontext:

log

Introduced:

5.6

Set this to report on data transactions for a given user. Report transactions in aerospike.log.

Additional information

Example: Enable reporting of data operations by user 'charlie':

report-data-op-user charlie

Dynamically enable reporting of data operations by user 'fred':

asinfo -v "set-config:context=security;log.report-data-op=true;user=fred"
ok
caution

Setting this for users with medium and higher throughput could significantly degrade overall performance and cause flooding in the logs.

report-sys-admin

[enterprise][dynamic]
Context:

security

Subcontext:

syslog

Set to true to report systems administration operations in the syslog file.
This parameter is dynamic as of version 5.6.

Additional information

Example: Set the parameter dynamically:

asinfo -v "set-config:context=security;syslog.report-sys-admin=true"
ok

report-sys-admin

[enterprise][dynamic]
Context:

security

Subcontext:

log

Set to true to report systems administration operations in aerospike.log.
This parameter is dynamic as of version 5.6.

Additional information

Example: Set the parameter dynamically:

asinfo -v "set-config:context=security;log.report-sys-admin=true"
ok

report-user-admin

[enterprise][dynamic]
Context:

security

Subcontext:

syslog

Set to true to report successful user administration operations in the syslog file.
This parameter is dynamic as of version 5.6.

Additional information

Example: Set the parameter dynamically:

asinfo -v "set-config:context=security;syslog.report-user-admin=true"
ok

report-user-admin

[enterprise][dynamic]
Context:

security

Subcontext:

log

Set to true to report successful user administration operations in aerospike.log.
This parameter is dynamic as of version 5.6.

Additional information

Example: Set the parameter dynamically:

asinfo -v "set-config:context=security;log.report-user-admin=true"
ok

report-violation

[enterprise][dynamic]
Context:

security

Subcontext:

syslog

Set to true to report security violations in the syslog file.
This parameter is dynamic as of version 5.6.

Additional information

Example: Set the parameter dynamically:

asinfo -v "set-config:context=security;syslog.report-violation=true"
ok

report-violation

[enterprise][dynamic]
Context:

security

Subcontext:

log

Set to true to report security violations in aerospike.log.
This parameter is dynamic as of version 5.6.

Additional information

Example: Set the parameter dynamically:

asinfo -v "set-config:context=security;log.report-violation=true"
ok

role-query-base-dn

[enterprise][static]
Context:

security

Subcontext:

ldap

Default:

query-base-dn value is used.

Introduced:

4.1

If specified, this value is used as the base DN when performing role queries.

Additional information
note

Certain characters in the value of this parameter must be escaped. See Parameters whose values must be escaped.

role-query-pattern

[enterprise][required][static]
Context:

security

Subcontext:

ldap

Introduced:

4.1

Format for the search filter to use when querying for a user's group membership information. The substitutions for username, ${un}, and distinguished name, ${dn}, will be replaced by the actual username and the user's full distinguished name when constructing the search filter. If needed, multiple role-query-pattern strings can be specified separately; each will be tried in order when querying for a user's information.
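
Example: A hypothetical filter for a directory where group entries list members by distinguished name (the object class and attribute depend on your LDAP schema):

role-query-pattern (&(objectClass=groupOfNames)(member=${dn}))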

role-query-search-ou

[enterprise][static]
Context:

security

Subcontext:

ldap

Default:

false

Introduced:

4.1

Whether to look for a user's group membership information in the organizational unit entries of the user's LDAP distinguished name.

server

[enterprise][required][static]
Context:

security

Subcontext:

ldap

Introduced:

4.1

Name of the LDAP server to use. Multiple servers can be specified via a comma-delimited string without white-space.
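
Example: Hypothetical LDAP server URIs, comma-delimited without white-space:

server ldaps://ldap1.example.com:636,ldaps://ldap2.example.com:636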

session-ttl

[enterprise][dynamic]
Context:

security

Default:

86400

Introduced:

5.7

Lifetime in seconds of an access token. A TCP connection attempt with an expired token will fail, and the client must log in again to get a fresh token. Allowable range is 120s (2 minutes) to 864000s (10 days). The server actually sets the expiry a minute shorter than the expiration time (renewal margin). The clients therefore refresh the token one minute prior to its actual set expiry.

This parameter was moved out of the ldap subcontext into the main security context in version 5.7.

session-ttl

[enterprise][dynamic]
Context:

security

Subcontext:

ldap

Default:

86400

Introduced:

4.1

Removed:

5.7

Lifetime (in seconds) of an access token. A TCP connection attempt with an expired token will fail, and the client must log in again to get a fresh token. Allowable range is 120 (2 minutes) to 864000 (10 days). The server reports an expiry time stamp one minute earlier than the actual expiry so that clients refresh the token before it gets too close to expiring.

This parameter was moved out of the ldap subcontext into the main security context in version 5.7.

syslog-local

[enterprise][static]
Context:

security

Subcontext:

syslog

Default:

-1

Introduced:

3.3.13

Local syslog facility to log to.

Additional information
note

The default value is -1, which means no logging.

Allowable range is 0 to 7.

The precise use of the syslog facilities local0 through local7 depends on your syslog implementation.

tls-ca-file

[enterprise][required][static]
Context:

security

Subcontext:

ldap

Introduced:

4.1

Path to the CA certificate file used for validating TLS connections to the LDAP server. Includes filename, e.g. /path/to/CA/cert/filename.

Additional information
note

May not be specified if disable-tls is set to true.

token-hash-method

[enterprise][static]
Context:

security

Subcontext:

ldap

Default:

sha-256

Introduced:

4.1

Hash algorithm to use when generating the HMAC for access tokens. Currently supported algorithms are sha-256 and sha-512.

tps-weight

[enterprise][dynamic]
Context:

security

Default:

2

Introduced:

5.6

A number indicating how much smoothing to do when maintaining transactions per second (tps) values for enforcing quotas. Smoothing makes the system less responsive to brief spikes in transaction rates, so that the more smoothing is used, the less likely it is that a brief spike in transactions above a user's quota will result in a violation. The allowable range is 2 (least smoothing) to 20 (most smoothing).

Additional information

The tps rates are computed every second as exponential moving averages, and a tps-weight of N means that the previous tps value is given (N-1) times the weight of the observed tps over the most recent second when performing the computation. The computation looks like:

tps = (((tps-weight - 1) * tps) + transactions_during_last_second) / tps-weight

So for example, with a tps-weight of 5, the computation would be:

tps = ((4 * tps) + transactions_during_last_second) / 5

Example: Set tps-weight to 8 dynamically:

asinfo -v "set-config:context=security;tps-weight=8"
ok

user-dn-pattern

[enterprise][static]
Context:

security

Subcontext:

ldap

Introduced:

4.1

Format for the distinguished name of the LDAP directory entry to use when binding to the LDAP server for user authentication. ${un} should be placed in this string to specify where the user ID is inserted when constructing the distinguished name.

Additional information
note

Either this option or user-query-pattern is required. Certain characters in the value of this parameter must be escaped. See Parameters whose values must be escaped.
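
Example: A hypothetical DN pattern; the actual attribute, organizational unit and domain components depend on your LDAP directory layout:

user-dn-pattern uid=${un},ou=people,dc=example,dc=com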

user-query-pattern

[enterprise][static]
Context:

security

Subcontext:

ldap

Introduced:

4.1

Format for the search filter to use when querying for a user's distinguished name. ${un} should be placed in this string to specify where the user ID is inserted when constructing the search filter.

Additional information
note

Either this option or user-dn-pattern is required. As of version 5.1, Aerospike Server escapes certain illegal characters in the user DN returned by the LDAP server before making role queries for the user. Previous versions would fail querying the LDAP server if such characters are present. The characters that are escaped are as follows:

  • *
  • (
  • )
  • \
  • /
  • space
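
Example: A hypothetical search filter; the object class and attribute depend on your LDAP schema:

user-query-pattern (&(objectClass=person)(uid=${un}))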

service

advertise-ipv6

[enterprise][dynamic]
Context:

service

Default:

false

Introduced:

3.10

Requires heartbeat v3. Set to true in order to enable IPv6.

allow-inline-transactions

[dynamic]
Context:

service

Default:

true

Removed:

3.11

By default, in-memory read transactions are handled by the service threads directly rather than being offloaded to a transaction queue and then picked up by a transaction thread. Setting allow-inline-transactions to false forces all transactions to be dispatched to a transaction queue. This can help in environments with limited network queues and/or service threads.

Additional information

Example: To disable inline transactions across a cluster: asadm -e "enable; asinfo -v 'set-config:context=service;allow-inline-transactions=false'"

auto-pin

[static]
Context:

service

Default:

none

Introduced:

3.12

This configuration controls the different options for CPU pinning. When using this configuration with Aerospike versions before 4.7, neither service-threads nor transaction-queues may be configured in the configuration file; both will default to the number of CPUs. With Aerospike 4.7+, service-threads can be configured, but must be a multiple of the number of CPUs if this configuration is in effect. Possible values are:

  • none - relying on Linux's irqbalance.
  • cpu - CPU pinning - Aerospike controls the interrupt affinity of all NIC queue interrupts.
  • numa - CPU and NUMA pinning - restrict memory and CPU usage of asd to a single NUMA node.
  • adq - Application Device Queue pinning - Aerospike dispatches a client request to a CPU based on the NIC queue associated with the corresponding client network connection. Requires an ADQ-enabled NIC and manual configuration of the NIC. Introduced in 4.7.
Additional information

cpu and numa require Linux kernel 3.19+. This is the default for Ubuntu 15.04+ and Debian 9+, but not CentOS 7 (3.10). If necessary, the Linux kernel can be upgraded. adq requires Linux kernel 4.12+. When moving away from any auto-pinning, a reboot is required to restore the system defaults for interrupts. When setting auto-pin to cpu, Aerospike versions before 4.7 don't allow transaction-queues and service-threads to be set in the configuration file; both will be forced to the number of CPUs - which is also the default in Aerospike versions 3.12+. Aerospike versions 4.7+ allow setting service-threads, but require the configured number to be a multiple of the number of CPUs. Contact Aerospike Support for recommendations and benchmark details prior to using these configurations.
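
For example, a minimal sketch assuming Aerospike 4.7+ on a hypothetical 8-CPU node (service-threads must then be a multiple of 8):

service {
    auto-pin cpu
    service-threads 16
}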

note

The network interface hardware should support MSI. MSI sends interrupts from a peripheral device (e.g., a NIC) to the CPU via the PCI bus. Older hardware had dedicated interrupt lines, so data exchange between the CPU and the device went via the PCI bus while interrupts were handled via a separate, out-of-band path; on modern hardware, everything goes through the PCI bus. Network interfaces not supporting MSI would assert with the following:

FAILED ASSERTION (hardware): (hardware.c:1087) interface eth0 does not support MSIs

It is also necessary for the ratio of NIC queues to CPU cores be greater than 1/4. The following message would otherwise be logged on the console and the server would not start:

WARNING (hardware): (hardware.c:1605) eth0 has very few NIC queues; only 8 out of 32 CPUs handle(s) NIC interrupts

batch-index-threads

[dynamic]
Context:

service

Default:

#cpu

Introduced:

3.6.0

Number of batch index response worker threads. In version 3.12 and later this is set by default to the number of CPU cores available. In previous versions, the default is 4. Each thread has its own queue. These threads only handle sending back batch response buffers to the client via sockets. Setting this parameter to 0 disables batch commands. Config file value range: 1-256 (a value of 0 can be set dynamically).

Additional information

Example: Set batch-index-threads to 16 dynamically:

asinfo -v "set-config:context=service;batch-index-threads=16"
ok
tip

Versions prior to 3.12 allowed a max value of 64.

batch-max-buffers-per-queue

[dynamic]
Context:

service

Default:

255

Introduced:

3.6.0

Number of 128 KiB response buffers allowed in each batch index queue before it is marked as full. A batch index queue (one per batch-index-threads thread) can have more than batch-max-buffers-per-queue buffers, but it will not receive any new batch until it drops below that number. When all queues are above batch-max-buffers-per-queue, new batch requests will be rejected and an error will be logged on the server: Failed to find active batch queue that is not full.
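
For example, with the default of 255 buffers of 128 KiB each, a full queue holds roughly 255 x 128 KiB ≈ 32 MiB of response data.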

Additional information

Example: Set batch-max-buffers-per-queue to 512 dynamically:

asinfo -v "set-config:context=service;batch-max-buffers-per-queue=512"
ok

batch-max-requests

[dynamic]
Context:

service

Default:

5000

Maximum number of keys allowed per node in a single batch request.

Additional information

Example: Set batch-max-requests to 6000 dynamically:

asinfo -v "set-config:context=service;batch-max-requests=6000"
ok

batch-max-unused-buffers

[dynamic]
Context:

service

Default:

256

Introduced:

3.6.0

Max number of 128 KiB response buffers allowed in buffer pool. If the limit is reached, completed buffers will be destroyed at the end of the batch request. For large batch workloads, it may be advisable to increase this configuration parameter to avoid unnecessary destruction and recreation of buffers, which would impact CPU load.

Additional information

Example: Set batch-max-unused-buffers to 512 dynamically:

asinfo -v "set-config:context=service;batch-max-unused-buffers=512"
ok

batch-priority

[dynamic]
Context:

service

Default:

200

Removed:

4.4

Number of sequential commands before yielding. A higher number gives a higher priority. Only applies to old batch direct protocol, which was removed in version 4.4.

Additional information

Example: Set batch-priority to 300 dynamically:

asinfo -v "set-config:context=service;batch-priority=300"
ok

batch-threads

[dynamic]
Context:

service

Default:

4

Removed:

4.4

Number of batch direct worker threads. Batch direct is the old batch protocol. These threads process the full batch requests. There is one batch queue for all batch threads. Value range: 0-256. Note the old batch direct protocol was removed in version 4.4.

Additional information

Example: Set batch-threads to 8 dynamically:

asinfo -v "set-config:context=service;batch-threads=8"
ok
tip

Versions prior to 3.12 allowed a max value of 64.

batch-without-digests

[dynamic]
Context:

service

Default:

true

Introduced:

4.9

Removed:

6.0

If set to true, digests are not included in batch responses.

Note that the default value is true as of server version 5.7. In earlier server versions the default value is false.

Additional information

Example: Dynamically set batch-without-digests to false:

asinfo -v "set-config:context=service;batch-without-digests=false"
ok
note

To use batch-without-digests, the minimum client versions required are as follows:

  • Java client version 4.4.5
  • C client version 4.6.6
  • C# client version 3.9.0
  • PHP client version 7.4.2
  • Python client version 3.9.0
  • Node.js client version 3.12.0

cluster-name

[dynamic]
Context:

service

Default:

null

Introduced:

3.10 (hb v3)

Only available with heartbeat v3, as of version 3.10. If set, a node can only join a cluster with a matching cluster-name. Clients providing a cluster name can only connect to a cluster matching that name.

Additional information

Example: Set the cluster-name to payments dynamically:

asinfo -v "set-config:context=service;cluster-name=payments"
ok

debug-allocations

[static]
Context:

service

Default:

none

Introduced:

3.14

Options for debugging memory allocations on the server.

Additional information
  • none - Feature not enabled.

  • transient - Feature enabled only for transient allocations - 'overhead' memory that is not record data or metadata.

  • persistent - Feature enabled only for persistent allocations - memory that is record data or metadata.

  • all - Feature enabled for all allocations.

note

When debug-allocations is enabled, the server will assert on detection of overwrites and (some) double frees. Also, each tracked allocation will incur a cost of 4 extra bytes.

For more complete debugging of double frees, also enable indent-allocations.
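
Example (a config-file sketch; the choice of the all option and the combination with indent-allocations are illustrative):

service {
...
debug-allocations all
indent-allocations true
...
}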

caution

When running with debug-allocations enabled for an extended time period (typically many months, though possibly sooner if using scans frequently with server 4.7 or later), internal memory tracking resources can eventually become exhausted. With older Aerospike servers (3.14 through 4.4; 4.5.0 versions prior to 4.5.0.19; 4.5.1 versions prior to 4.5.1.15; 4.5.2 versions prior to 4.5.2.10; 4.5.3 versions prior to 4.5.3.10; 4.6 versions prior to 4.6.0.8; 4.7 versions prior to 4.7.0.5), this condition leads to a crash. With newer Aerospike servers (4.5.0 versions 4.5.0.19 or newer; 4.5.1 versions 4.5.1.15 or newer; 4.5.2 versions 4.5.2.10 or newer; 4.5.3 versions 4.5.3.10 or newer; 4.6 versions 4.6.0.8 or newer; 4.7 versions 4.7.0.5 or newer), this condition simply results in the inability to detect any further memory leaks.

defrag-queue-escape

[dynamic]
Context:

service

Default:

10

Removed:

3.3.17

Max time (milliseconds) the defrag thread can sleep.

Additional information
note

Increase this number to slow down defrag and decrease it to speed defrag up.

defrag-queue-hwm

[dynamic]
Context:

service

Default:

500

Removed:

3.3.17

Write throughput limit beyond which defrag will pause. When breached, defrag will pause until write throughput gets below defrag-queue-lwm or 'defrag-queue-escape' milliseconds has elapsed, whichever happens first.

Additional information
note

The idea is to slow down defrag when write throughput is high. Increasing this value gives a slight increase in the defrag rate.

defrag-queue-lwm

[dynamic]
Context:

service

Default:

1

Removed:

3.3.17

Write throughput limit below which the defrag will resume after 'defrag-queue-hwm' has been breached.

Additional information
note

The idea is to resume defrag when normal write throughput is low. This has to be lower than defrag-queue-hwm. Increasing this value gives a slight increase in the defrag rate.

defrag-queue-priority

[dynamic]
Context:

service

Default:

1

Removed:

3.3.17

Priority of the defragmentation thread. A higher number will slow down defragmentation.

Additional information
note

This is the number of milliseconds to wait between processing each block being defragmented. It can be set to 0 to further speed up defragmentation, but the impact on disk I/O should be carefully monitored.

disable-udf-execution

[static]
Context:

service

Default:

false

Introduced:

4.5.3.21

Completely disallow the execution of User-Defined Functions (UDFs).

Additional information

Example: Disable UDF execution:

service {
...
disable-udf-execution true
...
}
note

Available starting with the following versions: 5.1.0.6, 5.0.0.7, 4.9.0.10, 4.8.0.13, 4.7.0.17, 4.6.0.19, 4.5.3.21.

downgrading

[enterprise][dynamic]
Context:

service

Introduced:

5.4.0.3, 5.3.0.8, 5.2.0.17

Used in conjunction with downgrades from version 5.2 or newer (where XDR bin shipping has been used) to pre-5.2, or from version 5.4 or newer (where XDR bin convergence has been used) to 5.3 or 5.2. When set to true before downgrading, it ensures record compatibility when sending records from nodes running the newer server version to nodes running the older version.

Note this parameter can only be set dynamically.

Additional information

Example: Set the parameter true:

asinfo -v "set-config:context=service;downgrading=true"
ok

dump-message-above-size

[static]
Context:

service

Default:

134217728

Removed:

3.7.2

Size in bytes above which a received message will be printed in the logs.

enable-benchmarks-fabric

[dynamic]
Context:

service

Default:

false

Introduced:

3.9

Enable histograms for fabric. Refer to the Histograms from Aerospike Logs page for details.

Additional information

Example: Set enable-benchmarks-fabric to true:

asinfo -v 'set-config:context=service;enable-benchmarks-fabric=true'
ok

enable-benchmarks-svc

[dynamic]
Context:

service

Default:

false

Introduced:

3.9

Removed:

4.8

Enable histograms for demarshal and transaction queue related operations. Refer to the Histograms from Aerospike Logs page for details. Removed in 4.8, after the removal of transaction queues in 4.7 made the queue histogram irrelevant and left the demarshal histogram almost equivalent to the "*-start" histograms for the different types of transactions: ops-sub-start, read-start, write-start, udf-start, udf-sub-start, and batch-sub-prestart.

Additional information

Example: Set enable-benchmarks-svc to true:

asinfo -v 'set-config:context=service;enable-benchmarks-svc=true'
ok

enable-health-check

[dynamic]
Context:

service

Default:

false

Introduced:

4.3.1.3

Monitors the health of a cluster and attempts to identify potential outlier nodes. Helpful if a node is suspected of underperforming and impacting the overall cluster. This does not replace regular monitoring and alerting for a cluster, but rather augments it. For best results, this has to be explicitly enabled on all the nodes. Refer to the health-stats and health-outliers commands.

Additional information

Example: Set the enable-health-check to true dynamically:

asinfo -v "set-config:context=service;enable-health-check=true"
ok
note

The statistics monitored are divided into cluster stats and local stats.
The cluster statistics monitored are fabric connections opened, number of node arrivals, number of proxy requests, and replica latency.
The local statistic monitored is device read latency.

enable-hist-info

[dynamic]
Context:

service

Default:

false

Introduced:

3.9

Enable histograms for info protocol transactions. Refer to the Histograms from Aerospike Logs page for details.

Additional information

Example: Set enable-hist-info to true:

asinfo -v 'set-config:context=service;enable-hist-info=true'
ok

enforce-best-practices

[static]
Context:

service

Default:

false

Introduced:

5.7

If enforce-best-practices is set to true, Aerospike will fail to start if any of the checked best practices are violated. When set to false, Aerospike will still start, but will log a warning for each failed best practice, set the failed_best_practices metric to true, and add the names of the failed best practices to the output of the best-practices info command.
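
Additional information

Example (a config-file sketch): enforce the best-practice checks at startup:

service {
...
enforce-best-practices true
...
}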

fabric-workers

[static]
Context:

service

Default:

16

Removed:

3.11.1.1

Number of fabric threads for inter-node communication. Replaced with channel specific settings: channel-*-recv-threads.

Additional information
note

This can be set to a maximum of 128 for server versions 3.4.1 and above. It has a maximum of 64 for older releases of 3.x and for 2.x.

feature-key-file

[enterprise][required][static]
Context:

service

Default:

/etc/aerospike/features.conf

Introduced:

4.0 (optional), 4.6 (required)

Location of the digitally signed feature key file containing the features that are enabled, for example the strong consistency mode introduced in version 4.0.

As of version 4.6, this file is required for all Enterprise Edition server nodes, whether an optional feature such as strong consistency is enabled or not.

As of version 5.5, multiple feature-key-file directives can be specified (up to 32 of them, which must be unique), allowing enabled features to be specified from multiple sources.

This directive has the following possible formats:

  • /path_to/featuresfile - the feature key information will be read from the file located at /path_to/featuresfile

As of version 5.5, this path can also specify a directory, in which case all files within the directory must be feature key files.

Additional information
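
Example (a config-file sketch, using the default path):

service {
...
feature-key-file /etc/aerospike/features.conf
...
}
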
note

The feature key expiration date is only checked at startup. The Aerospike server will continue to run after a feature key expires, but will fail to start/restart with an expired feature key.

caution

Enterprise Licensees currently not using a feature key file should contact Aerospike Support prior to upgrading to version 4.6 or above, in order to get their feature key file.

generation-disable

[dynamic]
Context:

service

Default:

false

Removed:

3.10

Completely disables generation checking.

Additional information
note

Used to override the client API options passed to the server.

group

[static]
Context:

service

Group to run as.
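
Additional information

Example (a config-file sketch; the aerospike user and group names are illustrative, and group is typically paired with the user parameter):

service {
...
user aerospike
group aerospike
...
}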

hist-track-back

[static]
Context:

service

Default:

300

Removed:

5.1.0

Total time span in seconds over which to cache data. This serves as a flag to enable/disable histograms. The reported track-back value can differ from the configured hist-track-back due to rounding based on the slice size. When the histogram is started, its number of rows is computed by integer division of 'back' / 'slice'. While the number of rows and the slice size are stored with the histogram, the back value is not: it is recomputed as (# rows) * (slice size) when reported. So when the histogram is started, 'back' is effectively rounded down to the nearest multiple of 'slice', and corresponds to the actual time window tracked by the histogram.
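
Additional information

For instance (illustrative values): with hist-track-back 305 and hist-track-slice 10, the number of rows is 305 / 10 = 30 by integer division, so the reported track-back is 30 * 10 = 300 seconds.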

hist-track-slice

[static]
Context:

service

Default:

10

Removed:

5.1.0

Period in seconds at which to cache histogram data.

hist-track-thresholds

[static]
Context:

service

Default:

1,8,64

Removed:

5.1.0

Comma-separated bucket values (in milliseconds) to track; must be powers of 2. For example: 1,4,16,64.

Additional information

Example:

hist-track-thresholds 1,2,4,8,16,32,64,128,256,512

indent-allocations

[static]
Context:

service

Default:

false

Introduced:

4.6

Extra option for debug-allocations which enables detection of all double frees.

Additional information
note

When indent-allocations is enabled, the server will assert on detection of overwrites and all double frees. Also, each tracked allocation will incur a cost of 256 extra bytes.

info-threads

[dynamic]
Context:

service

Default:

16

Number of threads to create to process info requests. This configuration is static in releases prior to 4.5.2. Maximum allowed value is 256 for server versions 4.5.2 and above. Value range: 1-256.

Additional information

Example: Set info-threads to 8 dynamically:

asinfo -v "set-config:context=service;info-threads=8"
ok

keep-caps-ssd-health

[static]
Context:

service

Default:

false

If true, enables non-root Aerospike users to keep permissions necessary to report (NVMe) device health. Currently, only 'age' is returned.
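
Additional information

Example (a config-file sketch, for deployments running the Aerospike daemon as a non-root user):

service {
...
keep-caps-ssd-health true
...
}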

log-local-time

[static]
Context:

service

Default:

false

Introduced:

3.7.0.1

By default, Aerospike server logs have timestamps in GMT. Set this configuration to true to have log timestamps in local time (the offset from GMT is also displayed).

Example: Dec 12 2015 18:52:39 GMT-0800: INFO (as): (as.c::494) service ready: soon there will be cake!

log-millis

[static]
Context:

service

Default:

false

Introduced:

3.13

Set this to true in order to get millisecond timestamps in the log file.
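
Additional information

Example (a config-file sketch):

service {
...
log-millis true
...
}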

microbenchmarks

[dynamic]
Context:

service

Default:

false

Removed:

3.9

Enable microbenchmarks for additional logging, in order to investigate complex issues or slow transactions. Removed in 3.9 and replaced with namespace-level configuration-enabled benchmarks. Refer to the Histograms from Aerospike Logs page for details.

microsecond-histograms

[dynamic]
Context:

service

Default:

false

Introduced:

5.1

Set the granularity of histograms to microseconds instead of the default milliseconds. For the auto enabled histograms, this configuration is static and nodes have to be restarted.

Additional information

Example: Set microsecond histograms to true:

asinfo -v "set-config:context=service;microsecond-histograms=true"
ok
note

The histogram time unit cannot be changed while histograms are being written to the log file. For auto-enabled benchmarks, which are always written to the log, a node restart is necessary to switch to microseconds. For configuration-enabled benchmark histograms, it is necessary to turn those benchmark histograms off prior to dynamically changing the microsecond-histograms setting. Benchmark histograms are all the benchmarks that can be enabled through an enable-benchmarks-xxx configuration parameter, for example enable-benchmarks-read or enable-benchmarks-write. Refer to the full list on the latency monitoring page or in this configuration reference manual.

migrate-fill-delay

[enterprise][dynamic]
Context:

service

Default:

0

Introduced:

4.3.1

Number of seconds to delay before starting 'fill' migrations. For Available mode (AP), fill migrations are migrations going to a node that didn't previously own the partition being migrated. For strong-consistency, these are migrations going to a non-roster-replica. These migrations aren't necessary if the cluster state change is transient (the normal case), since the migrated data would eventually be dropped when the cluster state is restored. This setting doesn't affect 'lead migrations', indicated by migrate_tx_partitions_lead_remaining. Refer to the Delaying "Fill" Migrations page for further details.

Additional information

Example: Enable a one hour fill delay across the cluster (also change it in the configuration file, since a restart reverts a dynamic change):

asadm -e "enable; asinfo -v 'set-config:context=service;migrate-fill-delay=3600'"

note

For versions 5.2+, can be overridden for a namespace with the dynamic parameter ignore-migrate-fill-delay.

For versions 4.5.0.2 and earlier, using time units (m, h, d) does not work when setting this configuration parameter dynamically.

For strong-consistency enabled namespaces, when quiescing, the migrate-fill-delay will only start 'counting' after the node is stopped.

When increasing the migrate-fill-delay time, the extension applies from the initial point the migrations would have started.

The migrate-fill-delay time is reset on any cluster change (cluster_size changing). For example, a full cluster shutdown with each node configured with a 1 hour delay (in the configuration file) will prevent 'fill' migrations from happening upon restarting of the cluster as long as there is at least 1 node re-joining the cluster every 1 hour (the cluster-stable command can be used to check that nodes are re-joining the cluster and the migrate-fill-delay can be dynamically updated if necessary).

For use cases taking advantage of this setting, it is a good practice to set migrate-fill-delay in the configuration file to ensure that fill migrations do not kick in during a rolling restart which would reset any dynamically set parameter.

caution

For Available mode (AP), if a stopped node either had its storage deleted or is configured to have an in-memory only namespace and wasn't quiesced and fully migrated before being stopped, then the period of time where the cluster is unable to satisfy the durability requirement set by the replication-factor configuration is extended by the migrate-fill-delay. Migrations will not start until the delay is up, or manually set to 0.

migrate-max-num-incoming

[dynamic]
Context:

service

Default:

4

Maximum number of partitions a node can be receiving records from at any given time. Default lowered from 256 to 4 as of version 3.10.1. This limits potential congestion on a given node, especially in situations where a node is added to a cluster. Can be cautiously increased in order to speed up migrations. Refer to manage migrations for further details.

Additional information

Example: Set migrate-max-num-incoming to 8 dynamically:

asinfo -v "set-config:context=service;migrate-max-num-incoming=8"
ok
tip

For versions 3.13 to 3.15, the maximum value is limited to 64 and for versions 3.16.0.1 and above, the maximum is 256.

caution

Allowing a higher number of incoming partitions during migrations can, in some cases, adversely impact performance (especially when coupled with a higher number of migrate-threads) and even cause unexpected bottlenecks that would require restarting nodes with a lower value. It is recommended to increase this parameter cautiously while monitoring network and disk I/O for potential bottlenecks. Decreasing this value will only take effect after threads that are processing data have completed (a full partition at a time).

migrate-read-priority

[dynamic]
Context:

service

Default:

10

Removed:

3.7.5

Disk i/o throttle for data migration. Number of records to read before sleeping for 'migrate-read-sleep' milliseconds.

Additional information
note

Setting this to 0 will disable this throttling knob.

migrate-read-sleep

[dynamic]
Context:

service

Default:

500

Removed:

3.7.5

Time to sleep, in milliseconds, when migrate-read-priority is reached. Will not sleep if migrate-read-priority is set to 0.

migrate-threads

[dynamic]
Context:

service

Default:

1

Number of threads per server allocated for data migration. Each thread will migrate one partition at a time. Increasing this parameter should be done with caution. Refer to manage migrations for further details. Value range: 0-100. Not dynamic before 3.2.0.

Additional information

Example: Set migrate-threads to 2 dynamically:

asinfo -v "set-config:context=service;migrate-threads=2"
ok
note

Decreasing this value will only take effect after threads that are processing data have completed (full partition at a time).

migrate-xmit-hwm

[dynamic]
Context:

service

Default:

10

Removed:

3.7.5

Used to throttle network I/O during migrations by limiting the number of 'in-flight' records.

Additional information
tip

Increasing this will speed up migrations.

caution

High values may impact transaction latencies.

migrate-xmit-lwm

[dynamic]
Context:

service

Default:

5

Removed:

3.7.5

Resumes migrations when the number of 'in-flight' records falls below this configuration's value.

migrate-xmit-priority

[dynamic]
Context:

service

Default:

40

Removed:

3.7.5

Number of records to ship before sleeping for 'migrate-xmit-sleep' milliseconds.

Additional information
note

Setting this to 0 will disable this throttling knob.

migrate-xmit-sleep

[dynamic]
Context:

service

Default:

500

Removed:

3.7.5

Time to sleep, in milliseconds, when migrate-xmit-priority is reached. Will not sleep if migrate-xmit-priority is set to 0.

min-cluster-size

[dynamic]
Context:

service

Default:

1

The minimum number of nodes required for a cluster to form. Necessary when configured with index-type flash to avoid running out of resources in case of cluster splits.

Additional information

Example: Set min-cluster-size dynamically to 6

asinfo -v "set-config:context=service;min-cluster-size=6"
ok
note

When running in strong-consistency mode, if the desired min-cluster-size represents less than half the total number of nodes in the cluster, min-cluster-size should not be configured. Minority sub-clusters make all partitions unavailable except the ones for which all the replicas are in the sub-cluster, so there is no new partition ownership and no increase in index device space or DRAM required. This serves the same purpose as configuring min-cluster-size, but is better since there will be some availability in the sub-cluster. (If min-cluster-size is configured in such cases, the nodes in a sub-cluster too small to form a cluster will eventually make everything unavailable.)

Also, for Available mode (AP) namespaces, the replication factor drops to 1 when a 1-node sub-cluster forms. So, for example with replication factor 2, if min-cluster-size is not configured, a 1-node sub-cluster is no worse than a 2-node sub-cluster in terms of the resources required. Of course, for a large cluster it may be necessary to configure min-cluster-size significantly higher than 2 or 3.

There are other, less common situations where configuring min-cluster-size can help. For example, to prevent a fresh node that is unable to join the cluster (for example, due to DNS resolution issues in the cloud) from claiming ownership of all partitions, or, when running across multiple racks, to prevent a single rack from forming its own cluster if it separates from the other racks.

node-id

[static]
Context:

service

Default:

N/A

Introduced:

3.16.0.1

Allows specifying the node-id of the node as a 1 to 16 character hexadecimal string, in order to make it friendlier or to influence the partition distribution, which is based on the cluster's node IDs. By default, Aerospike derives the node-id from the configured fabric port and the MAC address of one of the server's network interfaces (or, if configured, the MAC address of the node-id-interface).

Additional information

Example:

service {
<...>
node-id a1
<...>
}
note

Node IDs can be changed one node at a time in a rolling fashion across a cluster.

tip

Explicitly specifying the node ID is useful when leveraging a network-attached shadow device configuration (for example an EBS volume on AWS) that could be re-attached to a different instance; by default that instance would have a different node ID than the original one, causing more migrations.

It is also useful for having human readable names to refer to different nodes in a cluster as well as configuring strong-consistency enabled namespaces roster information.

caution

Changing the node-id in a strong-consistency enabled namespace would require re-setting the roster and should be done cautiously to avoid any availability and/or consistency impact.

As of version 3.16.0.1, a cluster will not accept 2 nodes with the same node-id. Having 2 nodes with the same node-id in a cluster would lead to erroneous and unexpected behavior. In particular, cluster size and data location would be incorrect, resulting in poor performance and unusual data responses.

The configuration file options node-id and node-id-interface are mutually exclusive.

node-id-interface

[static]
Context:

service

Introduced:

3.10

The name of the interface from which to generate the 'Node ID'. Used instead of network-interface-name, as of version 3.10, for the 'Node ID' generation. The 'Node ID' is used in the determination of the succession list for partition assignments across nodes in a cluster.

Additional information
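
Example (a config-file sketch; eth0 is an assumed interface name):

service {
...
node-id-interface eth0
...
}
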
caution

The configuration file options node-id and node-id-interface are mutually exclusive.

nsup-delete-sleep

[dynamic]
Context:

service

Default:

100

Introduced:

3.4.0

Removed:

4.5.1

Removed with version 4.5.1, as nsup no longer uses delete transactions. Number of microseconds to sleep between generating delete transactions.

Additional information

Example: Set nsup-delete-sleep to 50 microseconds dynamically:

asinfo -v "set-config:context=service;nsup-delete-sleep=50"
ok
note

For versions prior to 3.5.9, the default value is 0 microseconds, which might lead to a large number of objects queued up in set-delete.

nsup-period

[dynamic]
Context:

service

Default:

120

Removed:

4.5.1

The interval (in seconds) at which the expiration/eviction thread (namespace supervisor) wakes up.

As of version 4.3.0, setting nsup-period to a value of 0 will disable namespace supervisor for all namespaces.

Additional information

Example: Set nsup-period to 60 seconds dynamically:

asinfo -v "set-config:context=service;nsup-period=60"
ok
note

If nsup-period is dynamically set to zero while nsup is working, nsup finishes its current cycle and then becomes dormant. For additional discussion, see Namespace Data Retention Configuration.
Moved to namespace context as of version 4.5.1.

tip

On a system with a high number of expired or deleted records, this can be safely lowered to 60 or 30 seconds.

nsup-queue-escape

[dynamic]
Context:

service

Default:

10

Removed:

3.4.0

Max time (milliseconds) the expiration/eviction thread can sleep.

Additional information
tip

Increase this number to slow down expiration/eviction and decrease this to speed them up.

nsup-queue-hwm

[dynamic]
Context:

service

Default:

500

Removed:

3.4.0

Flow control for expiration/eviction on a namespace.

Additional information
tip

Increase this number to speed up evictions and lower this to slow them down.

nsup-queue-lwm

[dynamic]
Context:

service

Default:

1

Removed:

3.4.0

Flow control for expiration/eviction on a namespace.

Additional information
note

This has to be lower than nsup-queue-hwm.

tip

Increase to speed up evictions and decrease to slow them down.

nsup-startup-evict

[static]
Context:

service

Default:

true

Removed:

4.3.0

Also perform evictions (not expirations) at boot time if memory limits are breached.

object-size-hist-period

[dynamic]
Context:

service

Default:

3600

Introduced:

4.2.0.2

Removed:

4.5.1

Moved to namespace context as of version 4.5.1, and renamed nsup-hist-period.
The interval (secs) at which the object size histograms are updated.
As of version 4.3.0, setting object-size-hist-period to a value of 0 will disable object size histogram updates. Refer to the histogram info command for further details on the object size histogram.

Additional information
note

If object-size-hist-period is set to zero dynamically, subsequent info commands to get an object size histogram will, if any exist, return the last histogram generated.

os-group-perms

[static]
Context:

service

Default:

false

Introduced:

5.6

When set true, group read/write permissions are added to files created by the service.

Examples of affected files include storage files, system metadata (SMD) files, and log files.
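
Additional information

Example (a config-file sketch):

service {
...
os-group-perms true
...
}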

paxos-max-cluster-size

[unanimous] [static]
Context:

service

Default:

32

Removed:

3.10 (hb v3) / 3.14.0

Removed for heartbeat v3 in version 3.10 and above. Maximum number of nodes allowed in the cluster. Can be set to a maximum of 127 when setting up a new cluster. The default value of 32 allows for a maximum of 31 nodes (to avoid unnecessary usage of network bandwidth). Refer to Increase Maximum Cluster Size for how to increase the default on an already running cluster.

paxos-protocol

[unanimous] [dynamic]
Context:

service

Subcontext:

service

Default:

v3

Removed:

3.14.0

Paxos protocol version to be used in cluster. Should be one of v1, v2, v3, v4, v5 or none.

Additional information
  • v1 = Original protocol version

  • v2 = Expandable cluster size protocol version

  • v3 = SIndex query node protocol version

  • v4 = Rack Aware protocol version

  • v5 = Requires server version 3.13; used for dynamically changing the protocol.

  • none = Used only for dynamically changing the protocol (only on versions < 3.13)

Example:

paxos-protocol v5

paxos-recovery-policy

[unanimous] [dynamic]
Context:

service

Default:

auto-reset-master (as of 3.8.1)

Introduced:

3.7.0.1

Removed:

3.13 (Paxos v5)

Paxos configuration that provides better auto-recovery from cluster integrity issues caused by network instability.

Additional information
tip

Set it to auto-reset-master if the cluster has frequent split-brain situations due to network flakiness.

paxos-retransmit-period

[dynamic]
Context:

service

Default:

5

Removed:

3.14

Tuning parameter for how often to run retransmit checks for paxos.

paxos-single-replica-limit

[unanimous] [static]
Context:

service

Default:

1

Removed:

6.0

If the cluster size is less than or equal to this value, only one copy of the data (no replicas) will be kept in the cluster. Only in Available mode (AP). Will be ignored for strong-consistency configured namespaces. Should typically be configured to a few nodes under the expected cluster size for clusters that would be used at near capacity (per the usual capacity sizing guidelines) but will depend on the total size of the cluster and how full the nodes are within the cluster.

Additional information
note

As this configuration parameter is currently not dynamically configurable, refer to the migrate-fill-delay for a way to prevent migrations from filling up remaining nodes when a cluster size is unexpectedly reduced.

tip

This is useful when a cluster suddenly loses a node due to failure and the remaining nodes wouldn't be able to accommodate as many replica copies as dictated by the configured replication-factor.

pidfile

[static]
Context:

service

Default:

/var/run/aerospike/asd.pid (in config)

File to store the PID of the daemon.

Additional information
note

Not needed in a systemd environment. When using systemd a PID file is not created when specifying a pidfile in the service stanza of the aerospike.conf file. The logs will return a similar warning if pidfile is specified in the aerospike.conf file:

Oct 24 2018 21:20:55 GMT: WARNING (as): (as.c:337)
will not write PID file in new-style daemon mode
tip

If the PID file is manually moved without a restart of the Aerospike service, some of the Aerospike status checks might fail. If the location path needs to be updated, you would need to update the Aerospike configuration, update the /etc/init.d script and then restart the Aerospike service to generate a new PID file.

prole-extra-ttl

[dynamic]
Context:

service

Default:

0

Introduced:

4.5.0.10 only

Removed:

4.5.0.11

When set to a non-zero value, activates garbage collection of expired replica records, and specifies the number of seconds beyond a replica record's expiration time that the record becomes eligible to be deleted by this process. Included to support clusters with nodes on either side of the SMD protocol change during lengthy upgrades. Discussed in detail in Aerospike 4.5.1 Special Upgrade Instructions.

proto-fd-idle-ms

[dynamic]
Context:

service

Default:

0

Time in milliseconds to wait before reaping connections. The default means that idle connections are never reaped. The Aerospike server uses keep-alive for client sockets as of version 4.8.

Additional information

Example: Set proto-fd-idle-ms to 70000 dynamically:

asinfo -v "set-config:context=service;proto-fd-idle-ms=70000"
ok
note

Prior to version 5.1, the default is 60000.

proto-fd-max

[dynamic]
Context:

service

Default:

15000

Maximum number of open file descriptors opened on behalf of client connections.

Can be increased for higher throughput use cases or for absorbing temporary spikes in traffic.
Minimum: 1024. Maximum: 2097152.

At Aerospike Server start, this value must not exceed the system's file descriptor limit for the asd process. To avoid a startup problem, there are two alternatives:

  • Decrease the value of proto-fd-max in your Aerospike configuration file.
  • Increase the system's file descriptor limit for the asd process.
Additional information

Example: Set proto-fd-max to 30000 dynamically. Prior to Aerospike Server version 4.9, for a dynamic change, this limit was enforced only if the new value was lower than the system setting.

asinfo -v "set-config:context=service;proto-fd-max=30000"
ok
tip

When hitting this limit, the client connections will be dropped and the following log message will be displayed:
WARNING (service): (service.c:419) (repeated:103799) refusing client connection - proto-fd-max 50000

This parameter has to be lower than the OS limit. For further details, refer to the following article:
https://discuss.aerospike.com/t/increase-maximum-number-of-openfiles/1372

proto-slow-netio-sleep-ms

[dynamic]
Context:

service

Default:

1ms

Removed:

6.0

This configuration specifies how long to sleep between repeated attempts when sending the response buffer for "slow" queries. Can be used as a throttling parameter during unexpected network congestion when responses get re-queued.

Additional information

Example: asinfo -v "set-config:context=service;proto-slow-netio-sleep-ms=100"

note

This configuration is not available to be set in the configuration file. Thus, on a server restart, this would need to be dynamically configured again.

query-batch-size

[dynamic]
Context:

service

Default:

100

Removed:

6.0

Amount of disk I/O a query performs per I/O request. Refer to the Managing Queries page for further details on tuning and configuring limits for secondary queries.

Additional information

Example: Set query-batch-size to 75 dynamically:

asinfo -v "set-config:context=service;query-batch-size=75"
ok

query-buf-size

[dynamic]
Context:

service

Default:

2MB

The unit of buffer size at which network I/O is performed for secondary index queries. Used to avoid too many network calls. Decreasing this means more frequent network I/O and hence improved response at the socket level.

Additional information

Example: Set the query-buf-size to 500KB dynamically:

asinfo -v "set-config:context=service;query-buf-size=512000"
ok
note

This value can only be set dynamically. The value should be in bytes.

query-bufpool-size

[dynamic]
Context:

service

Default:

256

Removed:

5.7

This configuration specifies how many buffers to keep in a pool. This can be configured between the range of 1 to UINT32_MAX. The unit of buffer size at which network IO is performed can be configured with query-buf-size.

Additional information

Example: asinfo -v "set-config:context=service;query-bufpool-size=512"

query-in-transaction-thread

[dynamic]
Context:

service

Default:

false

Removed:

6.0

Run queries in transaction threads (server versions earlier than 4.7) or service threads (server versions 4.7 or later) instead of using query threads. Set it to ‘true’ when you expect queries to run for a short period of time or when the namespace is in-memory. Leave it set to ‘false’ if you expect longer running queries or if the namespace uses disk storage. Refer to the Managing Queries page for further details on tuning and configuring limits for secondary queries.

Additional information

Example: Set query-in-transaction-thread to true dynamically:

asinfo -v "set-config:context=service;query-in-transaction-thread=true"
ok

query-long-q-max-size

[dynamic]
Context:

service

Default:

500

Removed:

6.0

Number of queries in the long running query queue. A long running query is one that returns more records than the query-threshold. Refer to the Managing Queries page for further details on tuning and configuring limits for secondary queries.

Additional information

Example: Set query-long-q-max-size to 600 dynamically:

asinfo -v "set-config:context=service;query-long-q-max-size=600"
ok

query-max-done

[dynamic]
Context:

service

Default:

100

Introduced:

6.0

Max number of finished queries kept for monitoring. Value range: 0-1000.

Additional information

Example: Set query-max-done to 500 dynamically:

asinfo -v "set-config:context=service;query-max-done=500"
ok

query-microbenchmark

[dynamic]
Context:

service

Default:

false

Introduced:

3.3.10

Removed:

6.0

Enable microbenchmarks of queries.

query-pre-reserve-partitions

[dynamic]
Context:

service

Removed:

5.7

This configuration can be used to pre-reserve all queryable partitions before processing a query. Setting this to true might help reduce the potential inconsistency window during ongoing migrations for some use cases, but can also have an adverse effect. Enterprise licensees can discuss specific use cases that could benefit from this parameter with Aerospike Support.

Additional information

Example: asinfo -v "set-config:context=service;query-pre-reserve-partitions=true"

query-priority

[dynamic]
Context:

service

Default:

10

Removed:

6.0

Priority for query threads. Number of sequential query elements to read before yielding (for query-sleep-us micro seconds). A higher value is a higher priority. Refer to the Managing Queries page for further details on tuning and configuring limits for secondary queries.

Additional information

Example: Set query-priority to 20 dynamically:

asinfo -v "set-config:context=service;query-priority=20"
ok

query-priority-sleep-us

[dynamic]
Context:

service

Default:

1

Removed:

6.0

Time in microseconds that the server pauses after reading query-priority sequential query elements. Refer to the Managing Queries page for further details on tuning and configuring limits for secondary queries.

Additional information

Example: Set query-priority-sleep-us to 2 dynamically:

asinfo -v "set-config:context=service;query-priority-sleep-us=2"
ok

query-rec-count-bound

[dynamic]
Context:

service

Default:

UINT64_MAX

Removed:

6.0

This is the maximum number of records a query is allowed to return. A query returning beyond this limit is aborted. This can be configured between the range of 1 to UINT64_MAX.

Additional information

Example: asinfo -v "set-config:context=service;query-rec-count-bound=512"

query-req-in-query-thread

[dynamic]
Context:

service

Removed:

6.0

Setting this configuration to true will cause queries to always be processed in the main query thread, rather than being queued for processing by the query-worker-threads.

Additional information

Example: asinfo -v "set-config:context=service;query-req-in-query-thread=true"

query-req-max-inflight

[dynamic]
Context:

service

Default:

100

Removed:

6.0

Number of query I/O threads used per query at one time. Refer to the Managing Queries page for further details on tuning and configuring limits for secondary queries.

Additional information

Example: Set query-req-max-inflight to 150 dynamically:

asinfo -v "set-config:context=service;query-req-max-inflight=150"
ok

query-short-q-max-size

[dynamic]
Context:

service

Default:

500

Removed:

6.0

Number of queries in the short running query queue. A short running query is one that returns fewer records than the query-threshold. Refer to the Managing Queries page for further details on tuning and configuring limits for secondary queries.

Additional information

Example: Set query-short-q-max-size to 600 dynamically:

asinfo -v "set-config:context=service;query-short-q-max-size=600"
ok

query-threads

[dynamic]
Context:

service

Default:

6

Removed:

6.0

Number of dedicated query threads on the node. Value range: 1-32. Refer to the Managing Queries page for further details on tuning and configuring limits for secondary queries. Only even values are allowed from server version 5.7 onwards. Odd values are rounded up to the next even number for pre 5.7 server versions.

Additional information

Example: Set query-threads to 12 dynamically:

asinfo -v "set-config:context=service;query-threads=12"
ok

query-threads-limit

[dynamic]
Context:

service

Default:

128

Introduced:

6.0.0

Maximum number of threads allowed for all queries. Can be dynamically increased or decreased. Value range: 1-1024.

Additional information

Example: Set query-threads-limit to 64 dynamically:

asinfo -v "set-config:context=service;query-threads-limit=64"
ok

query-threshold

[dynamic]
Context:

service

Default:

10

Removed:

6.0

Dividing line between short running and long running queries. A query that returns fewer records than the query threshold is a short running query. All others are long running queries. Refer to the Managing Queries page for further details on tuning and configuring limits for secondary queries.

Additional information

Example: Set query-threshold to 20 dynamically:

asinfo -v "set-config:context=service;query-long-q-max-size=600"
ok

query-untracked-time-ms

[dynamic]
Context:

service

Default:

1000

Removed:

6.0

Queries that run longer than this configured time will be tracked by default. Refer to the Managing Queries page for further details on tuning and configuring limits for secondary queries.

Additional information

Example: Set query-untracked-time-ms to 5 sec dynamically:

asinfo -v "set-config:context=service;query-untracked-time-ms=5000"
ok

query-worker-threads

[dynamic]
Context:

service

Default:

15

Removed:

6.0

Number of dedicated I/O threads on the node. Refer to the Managing Queries page for further details on tuning and configuring limits for secondary queries.

Additional information

Example: Set query-worker-threads to 20 dynamically:

asinfo -v "set-config:context=service;query-worker-threads=20"
ok

replication-fire-and-forget

[dynamic]
Context:

service

Default:

false

Removed:

3.9

Removed in 3.9. If true, will not wait for a reply from replica writes. Will NOT retry if the initial attempt fails.

Replaced by: write-commit-level-override

Additional information
tip

Improves write performance a bit more than 'respond-client-on-master-completion.'

respond-client-on-master-completion

[dynamic]
Context:

service

Default:

false

Removed:

3.3.26

Removed in 3.3.26. If true, will not wait for a reply from replica writes. Will retry multiple times if the initial attempt fails.

Replaced by: write-commit-level-override

Additional information
note

This global setting is removed in favor of the client per-transaction write commit level policy and the server per-namespace 'write-commit-level-override' setting.

tip

Improves write performance.

run-as-daemon

[static]
Context:

service

Default:

true

If true, the initial process forks into a new process (which runs in the background) and exits.

Additional information
note

In 2.x the default is false.

scan-max-active

[dynamic]
Context:

service

Default:

100

Introduced:

3.6.0

Removed:

4.7.0

Max number of active scans allowed. Value range: 0-200.

Additional information

Example: Set scan-max-active to 150 dynamically:

asinfo -v "set-config:context=service;scan-max-active=150"
ok

scan-max-done

[dynamic]
Context:

service

Default:

100

Introduced:

3.6.0

Removed:

6.0.0

Max number of finished scans kept for monitoring. Value range: 0-1000.

Additional information

Example: Set scan-max-done to 500 dynamically:

asinfo -v "set-config:context=service;scan-max-done=500"
ok

This parameter was renamed to query-max-done in server 6.0.0.

scan-max-udf-transactions

[dynamic]
Context:

service

Default:

32

Introduced:

3.6.0

Removed:

4.7.0

Max number of active transactions per UDF background scan. In effect, limits the number of transactions sent from a UDF scan to the transaction queues.

Additional information

Example: Set scan-max-udf-transactions to 64 dynamically:

asinfo -v "set-config:context=service;scan-max-udf-transactions=64"
ok
note

For example, let's consider scan-max-udf-transactions set at 32. For a UDF scan, the scan thread will queue 32 transactions to the transaction queue. When attempting to queue a 33rd transaction, if none of the first 32 transactions has completed, the scan thread will sleep prior to re-attempting that transaction.
Refer to the FAQ on Scans and Scan UDF Throttling Guide articles for further details.

scan-priority

[dynamic]
Context:

service

Default:

200

Removed:

3.6.0

Throttle for scan.

Additional information
note

A value of 200 means a sleep of 1 microsecond every 200 records read.

caution

Do not set to 0.

scan-retransmit

[dynamic]
Context:

service

Default:

3600000

Removed:

3.3.5

Time in milliseconds to wait before doing a retransmit for scan task.

scan-threads

[dynamic]
Context:

service

Default:

4

Introduced:

3.6.0

Removed:

4.7.0

Size of scan thread pool. Can be dynamically increased or decreased. Maximum allowed value is 32 for versions prior to 3.16 and 128 for versions 3.16.0.1 and above.

Additional information

Example: Set scan-threads to 8 dynamically:

asinfo -v "set-config:context=service;scan-threads=8"
ok
caution

The typical recommended value is to match the number of cores on the host. Increasing this may impact regular transaction performance.

scan-threads-limit

[dynamic]
Context:

service

Default:

128

Introduced:

4.7.0

Removed:

6.0.0

Maximum number of threads allowed for all scans. Can be dynamically increased or decreased. Value range: 1-1024.

Additional information

Example: Set scan-threads-limit to 64 dynamically:

asinfo -v "set-config:context=service;scan-threads-limit=64"
ok

This parameter was renamed to query-threads-limit in server 6.0.0.

service-threads

[dynamic]
Context:

service

Default:

(5 × #cpu) or #cpu

Number of threads receiving client requests and executing transactions. On multi-socketed systems, if Non-Uniform Memory Access (NUMA) pinning is enabled, each Aerospike instance only counts the CPU cores on the socket it is servicing.

  • For versions 4.7 and later, this defaults to five times the number of CPU cores if there is at least one SSD namespace; otherwise it defaults to the number of CPU cores. Note that if all the namespaces are configured to be in memory (with or without persistence), this again defaults to the number of CPU cores. Persistent memory namespaces are treated equivalently to data-in-memory namespaces as of version 5.1 for the purpose of computing this default. The value range is 1-4096.
  • For 3.12 up to 4.7, this defaults to the number of CPU cores, and the value range is 1-256.
Additional information
tip

For versions 4.7 and later, the recommended value is five times the number of CPUs unless there are no SSD namespaces (i.e. all namespaces are data-in-memory), in which case the recommended value is the number of CPUs. Prior to 4.7, the recommended value is the number of CPUs. Automatically defaulted to the recommended values as of version 3.12. Static for versions prior to 4.7.
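
Example (illustrative value; assumes version 4.7 or later, where this parameter is dynamic): set service-threads to 40 dynamically:

asinfo -v "set-config:context=service;service-threads=40"
ok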

sindex-builder-threads

[dynamic]
Context:

service

Default:

4

Introduced:

3.6

Number of threads for building secondary indexes. Can be set dynamically for secondary indexes created while the server is already running. To be set in the configuration file for secondary indexes that are built or rebuilt during startup. A maximum value of 32 can be set for this config. Refer to this knowledge base article for further details.

Additional information

Example: asinfo -v 'set-config:context=service;sindex-builder-threads=5'

sindex-gc-max-rate

[dynamic]
Context:

service

Default:

50000

Introduced:

3.14.0

Removed:

5.7

The maximum processing rate (entries per second) for the secondary index entries garbage collector. Entries here refers to records that have been indexed by a secondary index.

Additional information

Example: asinfo -v "set-config:context=service;sindex-gc-max-rate=10000"

tip

This is an upper bound. In general, if entries are garbage collected, the effective rate would be lower. Note that in versions 3.14.0.X, the default is 1000000 which could impact overall system performance.

sindex-gc-period

[dynamic]
Context:

service

Default:

10

Introduced:

3.14.0

The interval (in seconds) at which the secondary index garbage collection thread runs.

As of version 4.3.0, setting sindex-gc-period to a value of 0 will disable secondary index garbage collection.

Additional information

Example: asinfo -v "set-config:context=service;sindex-gc-period=100"

note

If sindex-gc-period is dynamically set to zero while sindex garbage collection is in progress, the current cycle will complete, and then garbage collection will become dormant.

stay-quiesced

[enterprise][static]
Context:

service

Default:

false

Introduced:

5.2

If set true, the node will start up quiesced and will remain quiesced. It will also ignore the quiesce-undo command. For details on when to leverage this feature, refer to the Quiescing a node documentation page.
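
Additional information

Example (a config-file sketch):

service {
...
stay-quiesced true
...
}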

storage-benchmarks

[dynamic]
Context:

service

Default:

false

Removed:

3.9

Set to true to collect storage benchmarks and dump them in log to be analyzed by asloglatency. Removed in 3.9, and replaced with namespace level configuration enabled benchmarks. Refer to the Histograms from Aerospike Logs page for details.

ticker-interval

[dynamic]
Context:

service

Default:

10

Global configuration for how often to print 'ticker' info to the log in seconds.

Additional information

Example: Set ticker-interval to 20 dynamically:

asinfo -v "set-config:context=service;ticker-interval=20"
ok

transaction-duplicate-threads

[static]
Context:

service

Default:

0

Removed:

3.8.2.2

Worker Threads for duplicate resolution.

transaction-max-ms

[dynamic]
Context:

service

Default:

1000

How long to wait for success, in milliseconds, before timing out a transaction on the server (typically, but not necessarily, during replica write or duplicate resolution). This is overridden by the client transaction timeout (if set). Transactions taking longer than this time (or the time specified in the client policy) will return a timeout and tick the client_write_timeout metric.

Additional information

Example: Set transaction-max-ms to 2000 dynamically:

asinfo -v "set-config:context=service;transaction-max-ms=2000"
note

The transaction-max-ms (or, if specified, the client set timeout) gets checked in 4 different places:

  • when processing of a transaction begins
  • every 130ms (prior to server 5.7, or 5ms for server 6.0 and later) when waiting in the rw-hash (see rw_in_progress)
  • every 75ms (version 5.7 or earlier or 5ms for version 6.0 and above) when waiting in the proxy-hash (see proxy_in_progress)
  • periodically during UDF execution

    By default, a transaction will therefore not be retransmitted between server nodes (typically for prole writes or duplicate resolution) if the client does not specify a transaction timeout (this is independent of the client retry policy). If a transaction timeout is specified by the client, or if transaction-max-ms is increased, a transaction would be retried as many times as possible within this time frame. For example, if a client specifies a transaction timeout of 8 seconds and network issues prevent a write from being processed on the prole side, the fabric transaction would be retried up to 3 times, with an interval starting at 1 second (the default transaction-retry-ms) and doubling for every subsequent retry.

transaction-pending-limit

[dynamic]
Context:

service

Default:

20

Removed:

4.3.1.3

Moved to namespace context as of version 4.3.1.3. Maximum pending transactions that can be queued up to work on the same key. A value of 0 removes the limit (unlimited), and a value of 1 will allow a maximum of 1 transaction to be queued up in the rw-hash behind a transaction that is already in progress.

Additional information

Example: Set transaction-pending-limit to 3 dynamically:

asinfo -v "set-config:context=service;transaction-pending-limit=3"
ok
note

Increase this limit if the application works on a small set of keys frequently. If this value is exceeded, the overflow transactions will fail and the client will receive error code 14, Key Busy (tracked on the server side under the fail_key_busy statistic).

transaction-queues

[static]
Context:

service

Default:

#cpu

Removed:

4.7

Number of transaction queues managing client requests. In version 3.12 and above this is set by default to the number of CPU cores available. In previous versions, the default is 4. Service threads will dispatch transactions into those queues (round robin). Value range: 1-128.

Additional information
note

Version 4.7 unified transaction threads and service threads. As of version 4.7, service-threads is the only configuration option available to control the number of network and transaction processing threads.

tip

Typical recommended value is to match the number of CPU cores on the host.

transaction-repeatable-read

[dynamic]
Context:

service

Default:

false

Removed:

3.3.26

This flag temporarily relaxes read consistency during a cluster reconfiguration in order to maintain high read performance.

Obsoleted by: read-consistency-level-override

Additional information
note

This global setting is removed in favor of the client per-transaction read consistency level policy and the server per-namespace 'read-consistency-level-override' setting.

transaction-retry-ms

[dynamic]
Context:

service

Default:

1002

How long to wait for success, in milliseconds, before retrying a transaction. This also governs migration related transactions until version 3.10.1. In versions following 3.10.1, a new configuration, migrate-retransmit-ms is used for the migration related retransmits. The default of 1002 is meant to avoid retransmission by default based on the default transaction-max-ms.

Additional information

Example: Set transaction-retry-ms to 500 dynamically:

asinfo -v "set-config:context=service;transaction-retry-ms=500"
ok

transaction-threads-per-queue

[dynamic]
Context:

service

Default:

4

Removed:

4.7

Number of threads per transaction queue. Those threads will consume the requests from the transaction queues. This is not dynamically configurable in releases prior to 3.11. Value range: 1-256.

Additional information

Example: Set transaction-threads-per-queue dynamically to 6, only for versions post 3.11:

asinfo -v "set-config:context=service;transaction-threads-per-queue=6"
ok
note

Version 4.7 unified transaction threads and service threads. As of version 4.7, service-threads is the only configuration option available to control the number of network and transaction processing threads.

tip

The optimal value will depend on the workload and object size. For non-data in memory namespaces with small object size (~1 KiB), 3 threads per transaction queue is the optimal value. In general, a low number (between 3 and 8) is sufficient. The total number of transaction threads will be the product of transaction-queues and transaction-threads-per-queue.

use-queue-per-device

[static]
Context:

service

Default:

false

Removed:

3.11

If set to true, transaction queues are automatically set up per device.

user

[static]
Context:

service

User to run as.

Additional information
note

Effective even before the log file gets created.

vault-ca

[enterprise][static]
Context:

service

Introduced:

5.1

Path on Aerospike node to TLS certificate for authentication with Vault server. See the Vault integration documentation for further details.

vault-path

[enterprise][static]
Context:

service

Introduced:

5.1

The path on the Vault system to the stored secret. See the Vault integration documentation for further details.

Additional information
caution

Do not add the exact secret name as a suffix; this is supplied as the value of the Aerospike configuration parameter.

vault-token-file

[enterprise][static]
Context:

service

Introduced:

5.1

Path on Aerospike node to a file that contains a token that identifies the Aerospike server to the Vault server. This token is either from your orchestration system or a manual definition on the Vault system. See the Vault integration documentation for further details.

vault-url

[enterprise][static]
Context:

service

Introduced:

5.1

Protocol, domain name or IP address, and port of Vault service. See the Vault integration documentation for further details.
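
Additional information

Example (a combined config-file sketch for the Vault-related parameters; the URL and paths shown are illustrative assumptions):

service {
...
vault-url https://vault.example.com:8200
vault-ca /etc/aerospike/vault-ca.pem
vault-path secret/aerospike
vault-token-file /etc/aerospike/vault-token
...
}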

work-directory

[static]
Context:

service

Default:

/opt/aerospike

Directory to be used by the Aerospike process to store all metadata and system files.

Additional information
note

If this directory is user specified, the Aerospike process must have read/write permission on that directory.

write-duplicate-resolution-disable

[dynamic]
Context:

service

Default:

false

Removed:

3.15.0.1

Removed in version 3.15.0.1. Replaced with the namespace-level configuration disable-write-dup-res as of version 3.15.1.3. Disables write duplicate resolution. Write duplicate resolution is needed after recovering from node maintenance, node failure, or a network partition: different versions of a record are chased prior to applying the update. This only applies during migrations, when multiple versions of a given partition may exist.

Additional information
tip

Setting this to true disables write duplicate resolution, which can improve write performance during migrations but may also result in lost updates. This setting was removed because the performance impact of write duplicate resolution has been drastically reduced in recent releases. For special cases where performance is more important than potentially inaccurate record updates, this configuration has been reintroduced at the namespace level in version 3.15.1.3, under the name disable-write-dup-res.

xdr

auth-mode

[enterprise][dynamic]
Context:

xdr

Subcontext:

dc

Default:

none (as of 5.7)

Introduced:

4.7

This parameter specifies the authentication mode to be used b