Metrics Reference

Metrics are grouped by location. Each location corresponds to the context parameter passed to the asinfo command to display those metrics. For command examples, see Examples below.

Examples

Statistics:

Statistics is a container for overall database health and service metrics, which can be accessed with:

asinfo -h <host ip> -v 'statistics' -l

or using asadm:

Admin> show statistics service

Namespace:

Namespace contains health metrics for a particular namespace, which can be displayed with asinfo:

asinfo -h <host ip> -v 'namespace/<namespace name>'

or using asadm, for all namespaces statistics:

Admin> show statistics namespace

or for a specific namespace:

Admin> show statistics namespace for <namespaceName>

For set statistics:

Admin> show statistics set

or

asinfo -v sets

For set statistics for a specific namespace:

Admin> show statistics set for <namespaceName>

or

asinfo -v sets/<namespaceName>

For statistics on a specific set in a specific namespace:

asinfo -v sets/<namespaceName>/<set name>

For secondary index statistics:

Admin> show statistics sindex

or

asinfo -v sindex -l

For secondary index statistics for a specific namespace:

Admin> show statistics sindex for <namespaceName>

or

asinfo -v sindex/<namespaceName> -l

For statistics on a specific secondary index in a specific namespace (partial names can be used instead of full names):

Admin> show statistics sindex for <namespaceName> for <sindex name>

or

asinfo -v sindex/<namespace name>/<sindex name> -l

XDR:

XDR includes health and service metrics for Cross-Datacenter Replication.

Do not use XDR statistics to verify whether a namespace is currently enabled for XDR. XDR statistics are still displayed even if the namespace is not currently enabled. After the namespace is enabled, gathering of statistics resumes where it left off.

There are no independent namespace-level XDR statistics.

Statistics are per-datacenter or per-datacenter/per-namespace.

Display XDR statistics for an entire datacenter:

asinfo -h hostIPaddress -v "get-stats:context=xdr;dc=DC1"

Display XDR statistics for a specific namespace:

asinfo -h hostIPaddress -v "get-stats:context=xdr;dc=DC1;namespace=ns"


Bins

bin-names-quota

[instantaneous][integer]
Location:

Bins

Monitoring:

optional

Removed:

3.9

Quota of bin names for the namespace (32,768). Replaced by bin_names_quota as of version 3.9.

bin_names

[instantaneous][integer]
Location:

Bins

Monitoring:

optional

Introduced:

3.9

Number of bin names used for the namespace.

The formula for the associated metrics is as follows:

bin_names_quota - bin_names = available_bin_names
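
As a quick sanity check, these counters can be read directly with the bins info context (a minimal, illustrative command; substitute your own host and namespace, and note that the exact output format may vary by server version):

asinfo -h <host ip> -v 'bins/<namespace name>'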

bin_names_quota

[instantaneous][integer]
Location:

Bins

Monitoring:

optional

Introduced:

3.9

Quota of bin names for the namespace, fixed at 65,535 as of Aerospike Server version 5.0 (in earlier versions, the limit was 32,767).

The formula for the associated metrics is as follows:

bin_names_quota - bin_names = available_bin_names

If you have met the quota, see KB article How to clear up set and bin names when it exceeds the maximum set limit.

num-bin-names

[instantaneous][integer]
Location:

Bins

Monitoring:

optional

Removed:

3.9

Number of bin names used for the namespace. Replaced by bin_names as of version 3.9.

Namespace

appeals_records_exonerated

[enterprise][cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.0

Number of records that were marked replicated as a result of an appeal. Partition appeals will happen for namespaces operating under the strong-consistency mode when a node needs to validate the records it has when joining the cluster.

appeals_rx_active

[enterprise][instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.0

Number of partition appeals currently being received. Partition appeals will happen for namespaces operating under the strong-consistency mode when a node needs to validate the records it has when joining the cluster.

appeals_tx_active

[enterprise][instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.0

Number of partition appeals currently being sent. Partition appeals will happen for namespaces operating under the strong-consistency mode when a node needs to validate the records it has when joining the cluster.

appeals_tx_remaining

[enterprise][instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.0

Number of partition appeals not yet sent. Partition appeals will happen for namespaces operating under the strong-consistency mode when a node needs to validate the records it has when joining the cluster. Appeals occur after a node has been cold-started. The replication state of each record is lost on cold-start and all records must assume an unreplicated state. An appeal resolves replication state from the partition's acting master. These are important for performance; an unreplicated record will need to re-replicate before it can be read, which adds latency. During a rolling cold-restart, an operator may want to wait for the appeal phase to complete after each restart to minimize the performance impact of the procedure.

Additional information
note

Appeals happen prior to migrations starting. Appeals can strain the nodes that have to assist with them. The nodes assisting a node going through the appeals phase are those that took over master ownership while the appealing node was down. Once appeals have completed, migrations can start.

available-bin-names

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Removed:

3.9

Remaining number of unique bins that the user can create for this namespace. Replaced by available_bin_names as of version 3.9.

available_bin_names

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Remaining number of unique bins that the user can create for this namespace.

The formula for the associated metrics is as follows:

bin_names_quota - bin_names = available_bin_names

available_pct

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Removed:

3.9

Measures the minimum contiguous disk space for all disks in a namespace. Replaced by device_available_pct as of version 3.9.

Additional information

Example:

IF available_pct drops below 20%,
THEN warn your operations group.

This condition might indicate that defrag is unable to keep up with the current load. IF available_pct drops below 15%, THEN critical ALERT. Usable disk resources are critically low.

If available_pct drops below 5%, the condition might result in stop_writes.

batch_sub_delete_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of batch-index delete subtransactions that failed with an error. For example: invalid set name, unavailable (if SC), failure to apply a predexp filter, key mismatch (if the key was sent), device error (I/O error), key busy (duplicate resolution, or if SC), or a problem during a bitwise, HLL, or CDT operation.

batch_sub_delete_filtered_out

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of batch-index delete subtransactions that did not happen because the record was filtered out with Filter Expressions.

batch_sub_delete_not_found

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of batch-index delete subtransactions that resulted in not found.

batch_sub_delete_success

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

6.0

Number of records successfully deleted by batch-index subtransactions.

batch_sub_delete_timeout

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of batch-index delete subtransactions that timed out.

batch_sub_lang_delete_success

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of successful batch-index udf delete subtransactions.

batch_sub_lang_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of language (Lua) batch-index errors for udf subtransactions.

batch_sub_lang_read_success

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of successful batch-index udf read subtransactions.

batch_sub_lang_write_success

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of successful batch-index udf write subtransactions.

batch_sub_proxy_complete

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of proxied batch-index subtransactions that completed.

batch_sub_proxy_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of proxied batch-index subtransactions that failed with an error.

batch_sub_proxy_timeout

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of proxied batch-index subtransactions that timed out.

batch_sub_read_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of batch-index read subtransactions that failed with an error. For example: invalid set name, unavailable (if SC), failure to apply a predexp filter, key mismatch (if the key was sent), device error (I/O error), key busy (duplicate resolution, or if SC), or a problem during a bitwise, HLL, or CDT operation.

batch_sub_read_filtered_out

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.7

Number of batch-index read subtransactions that did not happen because the record was filtered out with Filter Expressions.

batch_sub_read_not_found

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of batch-index read subtransactions that resulted in not found.

batch_sub_read_success

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Number of records successfully read by batch-index subtransactions.

batch_sub_read_timeout

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of batch-index read subtransactions that timed out.

batch_sub_tsvc_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of batch-index read subtransactions that failed with an error in the transaction service, before attempting to handle the transaction. For example, protocol errors or security permission mismatches. In strong-consistency enabled namespaces, this includes transactions against unavailable_partitions and dead_partitions.

batch_sub_tsvc_timeout

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of batch-index read subtransactions that timed out in the transaction service, before attempting to handle the transaction.

batch_sub_udf_complete

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of completed batch-index udf subtransactions (for scan/query background udf jobs). Refer to the batch_sub_lang_delete_success, batch_sub_lang_error, batch_sub_lang_read_success, batch_sub_lang_write_success statistics for the underlying operation statuses.

batch_sub_udf_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of failed batch-index udf subtransactions (for scan/query background udf jobs). Does not include timeouts. Refer to the batch_sub_lang_delete_success, batch_sub_lang_error, batch_sub_lang_read_success, batch_sub_lang_write_success statistics for the underlying operation statuses.

batch_sub_udf_filtered_out

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of batch-index udf subtransactions that did not happen because the record was filtered out with Filter Expressions.

batch_sub_udf_timeout

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of batch-index udf subtransactions that timed out (for scan/query background udf jobs). Refer to the batch_sub_lang_delete_success, batch_sub_lang_error, batch_sub_lang_read_success, batch_sub_lang_write_success statistics for the underlying operation statuses.

batch_sub_write_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of batch-index write subtransactions that failed with an error. For example: invalid set name, unavailable (if SC), failure to apply a predexp filter, key mismatch (if the key was sent), device error (I/O error), key busy (duplicate resolution, or if SC), or a problem during a bitwise, HLL, or CDT operation.

batch_sub_write_filtered_out

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of batch-index write subtransactions that did not happen because the record was filtered out with Filter Expressions.

batch_sub_write_success

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

6.0

Number of records successfully written by batch-index subtransactions.

batch_sub_write_timeout

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of batch-index write subtransactions that timed out.

cache-read-pct

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Removed:

3.9

Percentage of read transactions that are hitting the post write queue and will save an I/O to the underlying storage device. Replaced by cache_read_pct as of version 3.9.

cache_read_pct

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Percentage of read transactions that are hitting the post-write-queue (or the blocks in the max-write-cache) and will save an I/O to the underlying storage device.
Refer to the post-write-queue and read-page-cache configuration parameters for ways to improve latency for read-intensive workloads by leveraging those two caching options.

Reads from update transactions, as well as migrations, scans, XDR reads, and anything else that tries to load a record off the device, are accounted for in the cache_read_pct figure.

client_delete_error

[cumulative][integer]
Location:

Namespace

Monitoring:

alert

Introduced:

3.9

Number of client delete transactions that failed with an error.

Additional information

Example:

Compare client_delete_error to client_delete_success.

IF ratio is higher than acceptable,
THEN alert operations to investigate.
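
One minimal way to pull both counters for such a comparison on a node (illustrative only; the grep filter is just a convenience, and a monitoring system would normally scrape these values instead):

asinfo -h <host ip> -v 'namespace/<namespace name>' -l | grep -E 'client_delete_(error|success)'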

client_delete_filtered_out

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.7

Number of client delete transactions that did not happen because the record was filtered out with Filter Expressions.

client_delete_not_found

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Number of client delete transactions that resulted in a not found.

client_delete_success

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Number of successful client delete transactions.

client_delete_timeout

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of client delete transactions that timed out.

client_lang_delete_success

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of client initiated udf transactions that successfully deleted a record.

client_lang_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of client initiated udf transactions that failed with a language (Lua) error during udf execution.

client_lang_read_success

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of successful client initiated udf read transactions.

client_lang_write_success

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of successful client initiated udf write transactions.

client_proxy_complete

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of completed proxy transactions initiated by a client request.

client_proxy_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of proxy transactions initiated by a client request that failed with an error.

client_proxy_timeout

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of proxy transactions initiated by a client request that timed out.

client_read_error

[cumulative][integer]
Location:

Namespace

Monitoring:

alert

Introduced:

3.9

Number of client read transaction errors. For example: invalid set name, unavailable (if SC), failure to apply a predexp filter, key mismatch (if the key was sent), device error (I/O error), key busy (duplicate resolution, or if SC), or a problem during a bitwise, HLL, or CDT operation.

Additional information

Example:

Compare client_read_error to client_read_success.

IF ratio is higher than acceptable,
THEN alert operations to investigate.

client_read_filtered_out

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.7

Number of client read transactions that did not happen because the record was filtered out with Filter Expressions.

client_read_not_found

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Number of client read transactions that resulted in not found.

client_read_success

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Number of successful client read transactions. Does not include records read by batch reads or scans. Batch reads have the separate batch_sub_read_success metric. Scans have separate metrics depending on the type of scan: scan_basic_complete, scan_aggr_complete, scan_ops_bg_complete, or scan_udf_bg_complete.

client_read_timeout

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Number of client read transactions that timed out.

client_tsvc_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of client transactions that failed in the transaction service, before attempting to handle the transaction. For example, protocol errors or security permission mismatches. In strong-consistency enabled namespaces, this includes transactions against unavailable_partitions and dead_partitions.

client_tsvc_timeout

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of client transactions that timed out while in the transaction service, before attempting to handle the transaction. At this stage the transaction has not yet been identified as a read or a write, but the namespace is known. With server versions prior to 4.7, the likely cause is congestion in the transaction queue (transaction threads unable to process efficiently enough); with server versions 4.7 or later, there may not be enough service threads to keep pace with the workload. Other common situations falling into this category are transactions that have to be retried after waiting in the rw-hash (for example, hot keys) and use cases where the timeout set by the client is too aggressive.

client_udf_complete

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Number of completed udf transactions initiated by the client.

client_udf_error

[cumulative][integer]
Location:

Namespace

Monitoring:

alert

Introduced:

3.9

Number of failed udf transactions initiated by the client. Does not include timeouts. See the server log file for more information about the error. Note that the error is also returned to the client.

Additional information

Example:

Compare client_udf_error to client_udf_complete.

IF ratio is higher than acceptable,
THEN alert operations to investigate.

client_udf_filtered_out

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.7

Number of client udf transactions that did not happen because the record was filtered out with Filter Expressions.

client_udf_timeout

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Number of udf transactions initiated by the client that timed out. The timeout error is returned to the client.

client_write_error

[cumulative][integer]
Location:

Namespace

Monitoring:

alert

Introduced:

3.9

Number of client write transactions that failed with an error. This includes common errors such as fail_generation, fail_key_busy, fail_record_too_big, and fail_xdr_forbidden, as well as some less common ones. Refer to the knowledge base article on Understanding Client Write Errors for further details on the types of errors that increment this statistic.

Additional information

Example:

Compare client_write_error to client_write_success.

IF ratio is higher than acceptable,
THEN alert operations to investigate.

For more details, see the knowledge base article on Understanding Client Write Errors.

client_write_filtered_out

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.7

Number of client write transactions that did not happen because the record was filtered out with Filter Expressions.

client_write_success

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Number of successful client write transactions. This included xdr_write_success in Aerospike Server releases prior to version 4.5.1.

client_write_timeout

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Number of client write transactions that timed out on the server. On a stable cluster with no migrations in progress this metric will indicate the number of replica write timeouts. A timeout error will be returned to the client. In strong-consistency enabled namespaces, the record is marked as unreplicated and will re-replicate.

Additional information

The following conditions can cause this metric to increment:

Every single write replica failure (master failing to replicate) ends up incrementing the client_write_timeout metric.

If duplicate resolution is enabled for writes (the default), the client_write_timeout metric will also increment during migrations if there is a timeout during duplicate resolution, which can occur before the write is applied on the master side.

Refer to the transaction-max-ms configuration parameter for details on when the server checks for timeouts. Transactions can also time out earlier in the transaction flow, in which case the client_tsvc_timeout statistic would increment.

clock_skew_stop_writes

[instantaneous][boolean]
Location:

Namespace

Monitoring:

alert

Introduced:

4.0

Namespace will stop accepting client writes when true.

For strong-consistency enabled namespaces, will be true if the clock skew is outside of tolerance (typically 20 seconds).

For Available mode (AP) namespaces running version 4.5.1 or later where nsup is enabled (that is, nsup-period is not zero), this will be true if the cluster clock skew exceeds 40 seconds. In such cases, nsup also does not run, which disables record expirations and evictions until the clock skew falls back within the tolerated range.

Additional information

Example:

IF clock_skew_stop_writes is true,
THEN critical ALERT.

Ensure clocks are synchronized across the cluster.

current-time

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Removed:

3.9

Current time represented as Aerospike epoch time.

Additional information

Example:

IF the cluster_max(current-time) and cluster_min(current-time) differ by more than 10 seconds,
THEN critical ALERT.

Server time skew might indicate that NTP or similar service is not running on this node.

current_time

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Current time represented as Aerospike epoch time.

Additional information

Example:

IF cluster_max(current_time) and cluster_min(current_time) differ by more than 10 seconds,
THEN critical ALERT.

Server time skew might indicate that NTP or similar service is not running on this node.

data-used-bytes-memory

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Removed:

3.9

Amount of memory occupied by data. Replaced with memory_used_data_bytes as of 3.9.

dead_partitions

[enterprise][instantaneous][integer]
Location:

Namespace

Monitoring:

alert

Introduced:

4.0

Number of dead partitions for this namespace (when using strong-consistency). This is the number of partitions that are unavailable when all roster nodes are present. Requires the use of the revive command to make them available again. Revived nodes restore availability only when all nodes are trusted.

Additional information

Example:

IF dead_partitions is not zero, THEN critical ALERT. If you are certain that there are no potential data inconsistencies or if data inconsistencies are acceptable, consider issuing revive and recluster commands.

note

A typical scenario where partitions would be marked as dead for a strong-consistency enabled namespace is when a number of nodes greater than the replication-factor are taken out of the cluster without a clean shutdown, or have their storage erased (even if migrations complete between each node). Even though the data is fully present in the cluster, the remaining nodes cannot know whether the departed nodes accepted any write transactions, and therefore cannot guarantee the integrity of the partitions that had all their replicas on those nodes. For example, for a replication-factor 2 namespace configured for strong consistency on a 10-node cluster, shutting down one node, waiting for migrations to complete, then shutting down a second node, erasing storage, and bringing both nodes back in results in approximately 90 partitions [2x(4096/(10x9))] being marked as dead. Invoking the revive and recluster commands provides 100% availability and, in this particular case, no data inconsistencies. Dead partitions turn into unavailable_partitions every time the roster is not complete for a namespace. Refer to Configuring Strong Consistency and Consistency Management for further details.
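
For reference, the revive and recluster steps can be issued as info commands along these lines (a hedged sketch; revive is typically run against each node holding dead partitions, followed by a single recluster, and the exact syntax should be confirmed for your server version before use):

asinfo -h <host ip> -v 'revive:namespace=<namespaceName>'
asinfo -h <host ip> -v 'recluster:'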

deleted_last_bin

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9.1

Number of objects deleted because their last bin was deleted.

device_available_pct

[instantaneous][integer]
Location:

Namespace

Monitoring:

alert

Introduced:

3.9

Measures the minimum contiguous disk space across all devices in a namespace. Replaces available_pct as of version 3.9. The namespace will be read only (stop writes) if this value falls below min-avail-pct. It is important for all configured devices in a namespace to have the same size; otherwise, device_available_pct could be low even when plenty of space is available on other devices.

Additional information

Not to be confused with device_free_pct, which represents the amount of free space across all devices in a namespace and does not take fragmentation into account.
Here is an example to illustrate the difference between device_free_pct and device_available_pct. Assume 5 devices of 100MB each for a given namespace, where each device has 25MB of data spread across 50 write blocks (assuming a 1MB write-block-size):
- The device_free_pct would be 75%.
- The device_available_pct would be 50%.
- If the distribution is not uniform (it usually is not perfectly uniform), device_available_pct represents the device that has the fewest free blocks.

Example:

IF device_available_pct drops below 20%,
THEN warn your operations group.

This condition might indicate that defrag is unable to keep up with the current load. IF device_available_pct drops below 15%,
THEN critical ALERT.

If device_available_pct drops below 5%, usable disk resources are critically low. This condition might result in stop_writes.
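
A minimal way to spot-check this value on a node (illustrative; substitute your own host and namespace, and note that a monitoring system would normally scrape it instead):

asinfo -h <host ip> -v 'namespace/<namespace name>' -l | grep device_available_pct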

device_compression_ratio

[enterprise][moving average][decimal]
Location:

Namespace

Monitoring:

watch

Introduced:

4.5.0.1

Measures the average compressed size to uncompressed size ratio. Thus 1.000 indicates no compression and 0.100 indicates a 1:10 compression ratio (90% reduction in size). Note that device_compression_ratio will not be included if the compression configuration parameter is set to none.

Additional information

The compression ratio is a moving average. It is calculated based on the most recently written records. Read records do not factor into the ratio. Records that don't try to compress are not included in the moving average. If the written data changes over time then the compression ratio will change with it. In case of a sudden change in data, the indicated compression ratio may lag behind a bit. As a rule of thumb, assume that the compression ratio covers the most recently written 100,000 to 1,000,000 records.

device_free_pct

[instantaneous][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Percentage of disk capacity free for this namespace. This is the amount of free storage across all devices in the namespace. Evictions will be triggered when the used percentage across all devices (which is represented by 100 - device_free_pct) crosses the configured high-water-disk-pct.

Additional information

Not to be confused with device_available_pct, which represents the amount of free contiguous space on the device that has the least contiguous free space across the namespace.
Here is an example to illustrate the difference between device_free_pct and device_available_pct. Assume 5 devices of 100MB each for a given namespace, where each device has 25MB of data spread across 50 write blocks (assuming a 1MB write-block-size):
- The device_free_pct would be 75%.
- The device_available_pct would be 50%.
- If the distribution is not uniform (it usually is not perfectly uniform), device_available_pct represents the device that has the fewest free blocks.

device_total_bytes

[instantaneous][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Total bytes of disk space allocated to this namespace on this node.

device_used_bytes

[instantaneous][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Total bytes of disk space used by this namespace on this node.

Additional information

Example:

Trending device_used_bytes provides operations insight into how disk usage changes over time for this namespace.

dup_res_ask

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

5.5

Number of duplicate resolution requests made by the node to other individual nodes.

dup_res_respond_no_read

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

5.5

Number of duplicate resolution requests handled by the node without reading the record.

dup_res_respond_read

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

5.5

Number of duplicate resolution requests handled by the node where the record was read.

effective_is_quiesced

[enterprise][instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.3.1

Reports 'true' when the namespace has rebalanced after previously receiving a quiesce info request.

effective_prefer_uniform_balance

[enterprise][instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.3

Applies only to Enterprise Version. Value can be true or false. If Aerospike applied the uniform balance algorithm for the current cluster state, the value returned is true. If any node having this namespace isn't configured with prefer-uniform-balance true, the value returned is false and the uniform balance algorithm is disabled for this namespace on all participating nodes.

effective_replication_factor

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.15.1.3

The effective replication factor for the namespace. The configured namespace replication factor is returned as part of the namespace configuration under replication-factor for server versions 3.15.1.3 and later, and under repl-factor for earlier versions. The effective replication factor is less than the set replication factor if the cluster size is smaller than the set replication factor (in which case the effective replication factor would match the cluster size) or, with versions 5.7 and earlier, if the paxos-single-replica-limit size is reached (in which case the effective replication factor is 1).

evict_ttl

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

The current eviction depth, or the highest TTL of records that have been evicted, in seconds.

evict_void_time

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

The current eviction depth, expressed as a void time in seconds since 1 January 2010 UTC.

evicted-objects

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Removed:

3.9

Number of objects evicted from this namespace on this node since the server started. Replaced with evicted_objects as of version 3.9.

evicted_objects

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Number of objects evicted from this namespace on this node since the server started.

expired-objects

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Removed:

3.9

Number of objects expired from this namespace on this node since the server started. Replaced with expired_objects as of version 3.9.

expired_objects

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Number of objects expired from this namespace on this node since the server started.

fail_client_lost_conflict

[enterprise][cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

5.6

Number of client (non-XDR) write transactions that failed because some bin's last-update-time is greater than the transaction time. Error code 28 would be returned. This can happen only when XDR's bin convergence feature is enabled, due to either:
- a clock skew across DCs, which causes an XDR write transaction to write bins with a future timestamp compared to local time.
- a race condition between an incoming XDR write transaction and a local client write transaction.

See also fail_xdr_lost_conflict and cluster_max_compatibility_id.

fail_generation

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Number of read/write transactions failed on generation check. Replaces err_write_fail_generation as of version 3.9.

fail_key_busy

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Number of read/write transactions that failed on 'hot keys', meaning the number of transactions already waiting in the rw-hash (rw_in_progress) for the same record was higher than transaction-pending-limit. For reads, this can only happen when duplicate resolution is necessary. Replaces err_rw_pending_limit as of version 3.9.

Additional information

Detail level logging for the rw context will log the digests of transactions triggering this error. Read transactions can only trigger it when duplicate resolution is necessary. Example:

IF the application is not expected to have hot keys and the fail_key_busy rate of change exceeds expectations,
THEN this condition might indicate a problem with the application.
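
As an illustration, detail logging for the rw context can usually be enabled (and later reverted) dynamically with commands along these lines, assuming log sink id 0; verify the log-set syntax for your server version first:

asinfo -h <host ip> -v 'log-set:id=0;rw=detail'
asinfo -h <host ip> -v 'log-set:id=0;rw=info'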

fail_record_too_big

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Number of write transactions that failed because the record is so big that it breaches the write-block-size or max-record-size. Only counts client write failures on the master side. Replaces err_write_fail_record_too_big as of version 3.9.

Additional information

Detail level logging for the rw context will log the digests of transactions triggering this error (originating from client-side master writes). Enabling detail level logging for the drv_ssd context will log all attempts at writing records that are too big, including replica writes, immigration (migration) writes, and applying duplicate resolution winners. Refer to the FAQ - Write Block Size knowledge base article for other details.

fail_xdr_forbidden

[enterprise][cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Number of read/write transactions that failed due to a configuration restriction. Error code 22 would be returned. This counts any traffic rejected due to either:
- incoming XDR traffic (xdr-write stat) with allow-xdr-writes set to false.
- non-XDR write traffic with allow-nonxdr-writes set to false.

fail_xdr_lost_conflict

[enterprise][cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

5.6

Number of XDR write transactions that did not succeed in updating all the attempted bins. Only a subset of the bin updates might have failed, or all of them might have failed. This can happen only when XDR's bin convergence feature is enabled. If a conflicting write happens on the same record across 2 or more DCs, the bin with the lower last-update-time loses during XDR shipping.

See also fail_client_lost_conflict.

free-pct-disk

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Removed:

3.9

Percentage of disk capacity free for this namespace. Replaced with device_free_pct as of 3.9.

free-pct-memory

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Removed:

3.9

Percentage of memory capacity free for this namespace. Replaced with memory_free_pct as of 3.9.

from_proxy_batch_sub_delete_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of batch-index delete subtransactions proxied from another node that failed with an error.

from_proxy_batch_sub_delete_filtered_out

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of batch-index delete subtransactions proxied from another node that did not happen because the record was filtered out with Filter Expressions.

from_proxy_batch_sub_delete_not_found

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of batch-index delete subtransactions proxied from another node that resulted in not found.

from_proxy_batch_sub_delete_success

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of records successfully deleted by batch-index subtransactions proxied from another node.

from_proxy_batch_sub_delete_timeout

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of batch-index delete subtransactions proxied from another node that timed out.

from_proxy_batch_sub_lang_delete_success

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of successful batch-index udf delete subtransactions proxied from another node.

from_proxy_batch_sub_lang_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of language (Lua) batch-index errors for udf subtransactions proxied from another node.

from_proxy_batch_sub_lang_read_success

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of successful batch-index udf read subtransactions proxied from another node.

from_proxy_batch_sub_lang_write_success

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of successful batch-index udf write subtransactions proxied from another node.

from_proxy_batch_sub_read_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of batch-index read subtransactions proxied from another node that failed with an error.

from_proxy_batch_sub_read_filtered_out

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.7

Number of batch-index read subtransactions proxied from another node that did not happen because the record was filtered out with Filter Expressions.

from_proxy_batch_sub_read_not_found

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of batch-index read subtransactions proxied from another node that resulted in not found.

from_proxy_batch_sub_read_success

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of records successfully read by batch-index subtransactions proxied from another node.

from_proxy_batch_sub_read_timeout

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of batch-index read subtransactions proxied from another node that timed out.

from_proxy_batch_sub_tsvc_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of batch-index read subtransactions proxied from another node that failed with an error in the transaction service, before attempting to handle the transaction. For example, protocol errors or security permission mismatches. In strong-consistency enabled namespaces, this will include transactions against unavailable_partitions and dead_partitions.

from_proxy_batch_sub_tsvc_timeout

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of batch-index read subtransactions proxied from another node that timed out in the transaction service, before attempting to handle the transaction.

from_proxy_batch_sub_udf_complete

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of completed batch-index udf subtransactions proxied from another node (for scan/query background udf jobs). Refer to the from_proxy_batch_sub_lang_delete_success, from_proxy_batch_sub_lang_error, from_proxy_batch_sub_lang_read_success, from_proxy_batch_sub_lang_write_success statistics for the underlying operation statuses.

from_proxy_batch_sub_udf_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of failed batch-index udf subtransactions proxied from another node (for scan/query background udf jobs). Does not include timeouts. Refer to the from_proxy_batch_sub_lang_delete_success, from_proxy_batch_sub_lang_error, from_proxy_batch_sub_lang_read_success, from_proxy_batch_sub_lang_write_success statistics for the underlying operation statuses.

from_proxy_batch_sub_udf_filtered_out

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of batch-index udf subtransactions proxied from another node that did not happen because the record was filtered out with Filter Expressions.

from_proxy_batch_sub_udf_timeout

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of batch-index udf subtransactions proxied from another node that timed out (for scan/query background udf jobs). Refer to the from_proxy_batch_sub_lang_delete_success, from_proxy_batch_sub_lang_error, from_proxy_batch_sub_lang_read_success, from_proxy_batch_sub_lang_write_success statistics for the underlying operation statuses.

from_proxy_batch_sub_write_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of batch-index write subtransactions proxied from another node that failed with an error.

from_proxy_batch_sub_write_filtered_out

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of batch-index write subtransactions proxied from another node that did not happen because the record was filtered out with Filter Expressions.

from_proxy_batch_sub_write_success

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of records successfully written by batch-index subtransactions proxied from another node.

from_proxy_batch_sub_write_timeout

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of batch-index write subtransactions proxied from another node that timed out.

from_proxy_delete_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of errors for delete transactions proxied from another node. This includes xdr_from_proxy_delete_error.

from_proxy_delete_filtered_out

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.7

Number of delete transactions proxied from another node that did not happen because the record was filtered out with Filter Expressions.

from_proxy_delete_not_found

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of delete transactions proxied from another node that resulted in not found. This includes xdr_from_proxy_delete_not_found.

from_proxy_delete_success

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of successful delete transactions proxied from another node. This includes xdr_from_proxy_delete_success.

from_proxy_delete_timeout

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of timeouts for delete transactions proxied from another node. This includes xdr_from_proxy_delete_timeout.

from_proxy_lang_delete_success

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of successful udf delete transactions proxied from another node.

from_proxy_lang_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of language (Lua) errors for udf transactions proxied from another node.

from_proxy_lang_read_success

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of successful udf read transactions proxied from another node.

from_proxy_lang_write_success

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of successful udf write transactions proxied from another node.

from_proxy_read_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of errors for read transactions proxied from another node.

from_proxy_read_filtered_out

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.7

Number of read transactions proxied from another node that did not happen because the record was filtered out with Filter Expressions.

from_proxy_read_not_found

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of read transactions proxied from another node that resulted in not found.

from_proxy_read_success

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of successful read transactions proxied from another node.

from_proxy_read_timeout

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of timeouts for read transactions proxied from another node.

from_proxy_tsvc_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of transactions proxied from another node that failed in the transaction service, before attempting to handle the transaction. For example, protocol errors or security permission mismatches. In strong-consistency enabled namespaces, this will include transactions against unavailable_partitions and dead_partitions.

from_proxy_tsvc_timeout

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of transactions proxied from another node that timed out while in the transaction service, before attempting to handle the transaction. At this stage the transaction has not yet been identified as a read or a write, but the namespace is known. For servers prior to 4.7, a possible cause is congestion in the transaction queue (transaction threads unable to process efficiently enough); for server 4.7 or later, there could be congestion in the internal transaction queue. The timeout set by the client may also be too aggressive.

from_proxy_udf_complete

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of successful udf transactions proxied from another node.

from_proxy_udf_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of errors for udf transactions proxied from another node.

from_proxy_udf_filtered_out

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.7

Number of udf transactions proxied from another node that did not happen because the record was filtered out with Filter Expressions.

from_proxy_udf_timeout

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of timeouts for udf transactions proxied from another node.

from_proxy_write_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of errors for write transactions proxied from another node. This includes xdr_from_proxy_write_error.

from_proxy_write_filtered_out

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.7

Number of write transactions proxied from another node that did not happen because the record was filtered out with Filter Expressions.

from_proxy_write_success

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of successful write transactions proxied from another node. This includes xdr_from_proxy_write_success.

from_proxy_write_timeout

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of timeouts for write transactions proxied from another node. This includes xdr_from_proxy_write_timeout.

geo_region_query_cells

[cumulative][integer]
Location:

Namespace

Monitoring:

Introduced:

3.9

Number of cell coverings for the query regions queried.

geo_region_query_falsepos

[instantaneous][integer]
Location:

Namespace

Monitoring:

Introduced:

3.9

Number of points outside the region. Total query result points is geo_region_query_points + geo_region_query_falsepos.

geo_region_query_points

[instantaneous][integer]
Location:

Namespace

Monitoring:

Introduced:

3.9

Number of points within the region. Total query result points is geo_region_query_points + geo_region_query_falsepos.

geo_region_query_reqs

[cumulative][integer]
Location:

Namespace

Monitoring:

Introduced:

3.9

Number of geo queries on the system since the node started.

hwm-breached

[instantaneous][boolean]
Location:

Namespace

Monitoring:

optional

Removed:

3.9

If true, Aerospike has breached 'high-water-[disk|memory]-pct' for this namespace. Replaced with hwm_breached as of version 3.9.

Additional information

Example:

IF hwm-breached is true,
THEN alert your operations group that memory or disk resources are strained. This condition might indicate the need to increase cluster capacity.

hwm_breached

[instantaneous][boolean]
Location:

Namespace

Monitoring:

alert

Introduced:

3.9

If true, Aerospike has breached 'high-water-[disk|memory]-pct' for this namespace.

Additional information

Example:

IF hwm_breached is true,
THEN alert your operations group that memory or disk resources are strained. This condition might indicate the need to increase cluster capacity.

index-type.mount[ix].age

[enterprise][instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.3

Applies only to Enterprise Version configured with index-type flash. This shows the percentage of lifetime (total usage) claimed by the OEM for the underlying device. The value is -1 unless the underlying device is NVMe, and it may exceed 100. 'ix' is the device index. For example, storage-engine.file[0]=/opt/aerospike/test0.dat and storage-engine.file[1]=/opt/aerospike/test2.dat for 2 files specified in the configuration.

index-used-bytes-memory

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Removed:

3.9

Amount of memory occupied by the index for this namespace. Replaced with memory_used_index_bytes as of version 3.9.

index_flash_alloc_bytes

[enterprise][instantaneous][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

5.6

Applies only to Enterprise Version configured with index-type flash. Total bytes allocated on the mount(s) for the primary index used by this namespace on this node. This statistic represents entire 4KiB chunks which have at least one element in use. Also available in the log on the index-flash-usage ticker entry.

index_flash_alloc_pct

[enterprise][instantaneous][integer]
Location:

Namespace

Monitoring:

alert

Introduced:

5.6

Applies only to Enterprise Edition configured with index-type flash. Percentage of the mount(s) allocated for the primary index used by this namespace on this node. Calculated as (index_flash_alloc_bytes / index-type.mounts-size-limit) * 100. This statistic represents entire 4KiB chunks which have at least one element in use. Also available in the log on the index-flash-usage ticker entry.

Additional information

Example:

IF index_flash_alloc_pct gets close to or above 100%,
THEN alert operations to review the sizing of the namespace.

index_flash_used_bytes

[enterprise][instantaneous][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

4.3

Applies only to Enterprise Version configured with index-type flash. Total bytes in-use on the mount(s) for the primary index used by this namespace on this node. This is the same value memory_used_index_bytes would have if the index were not persisted.

index_flash_used_pct

[enterprise][instantaneous][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

4.3

Applies only to Enterprise Edition configured with index-type flash. Percentage of the mount(s) in-use for the primary index used by this namespace on this node. Calculated as (index_flash_used_bytes / index-type.mounts-size-limit) * 100.

index_pmem_used_bytes

[enterprise][instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5

Applies only to Enterprise Version configured with index-type pmem. Total bytes in-use on the mount(s) for the primary index used by this namespace on this node. This is the same value memory_used_index_bytes would have if the index were not persisted.

index_pmem_used_pct

[enterprise][instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5

Applies only to Enterprise Edition configured with index-type pmem. Percentage of the mount(s) in-use for the primary index used by this namespace on this node. Calculated as (index_pmem_used_bytes / index-type.mounts-size-limit) * 100.

ldt_delete_success

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Number of successful LDT delete operations.

ldt_deletes

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Number of LDT delete operations.

ldt_errors

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Number of LDT errors.

ldt_read_success

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Number of successful LDT read operations.

ldt_reads

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Number of LDT read operations.

ldt_updates

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Number of LDT update operations.

ldt_write_success

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Number of successful LDT write operations.

ldt_writes

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Number of LDT write operations.

master-objects

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Removed:

3.9

Number of records on this node which are active masters. Replaced by master_objects as of version 3.9.

master-sub-objects

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Removed:

3.9

Number of LDT sub-records on this node which are active masters. Replaced by master_sub_objects as of version 3.9.

master_objects

[instantaneous][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Number of records on this node which are active masters.

master_sub_objects

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Removed:

3.14

Number of LDT sub-records on this node which are active masters.

master_tombstones

[enterprise][instantaneous][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.10

Number of tombstones on this node which are active masters.

max-evicted-ttl

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Removed:

Yes

The highest record TTL that Aerospike has evicted from this namespace.

max-void-time

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Removed:

3.9

Maximum record TTL ever inserted into this namespace. Replaced by max_void_time as of version 3.9.

max_void_time

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Maximum record TTL ever inserted into this namespace.

memory_free_pct

[instantaneous][integer]
Location:

Namespace

Monitoring:

alert

Introduced:

3.9

Percentage of memory capacity free for this namespace.

Additional information

Example:

IF memory_free_pct approaches the configured value for high-water-memory-pct or stop-writes-pct,
THEN alert operations to investigate the cause. This might indicate a need to reduce the object count or to increase capacity, and may warrant further investigation into memory_used_sindex_bytes if secondary indexes are in use, memory_used_set_index_bytes if set indexes are used, or heap_efficiency_pct if data is stored in memory.
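
A minimal monitoring sketch of this rule, assuming hypothetical threshold values. Note that memory_free_pct is a free percentage, while high-water-memory-pct and stop-writes-pct are used percentages, so the value is converted before comparing:

# Hypothetical sample values; real values come from namespace statistics and config.
memory_free_pct = 18          # reported by the namespace
high_water_memory_pct = 60    # configured high-water-memory-pct
stop_writes_pct = 90          # configured stop-writes-pct

memory_used_pct = 100 - memory_free_pct
if memory_used_pct >= stop_writes_pct:
    print("CRITICAL: namespace has reached stop-writes-pct")
elif memory_used_pct >= high_water_memory_pct:
    print("WARNING: namespace is past high-water-memory-pct; evictions may begin")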

memory_used_bytes

[instantaneous][integer]
Location:

Namespace

Monitoring:

alert

Introduced:

3.9

Total bytes of memory used by this namespace on this node. This is the metric that is used against the high-water-memory-pct and stop-writes-pct thresholds. It represents the sum of the following values:
memory_used_data_bytes
memory_used_index_bytes
memory_used_set_index_bytes (version 5.6+)
memory_used_sindex_bytes

Refer to heap_allocated_kbytes for the total amount of memory allocated on a node (other than primary index shared memory in Enterprise Edition).

Additional information

Example:

Trending memory_used_bytes provides operations insight into how memory usage changes over time for this namespace.
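
As a sanity-check sketch only (the sample numbers are made up; memory_used_set_index_bytes exists only on server 5.6 and later), the reported total should match the sum of its components:

# Hypothetical component values as reported by the namespace statistics.
components = {
    "memory_used_data_bytes": 1_500_000_000,
    "memory_used_index_bytes": 640_000_000,
    "memory_used_set_index_bytes": 12_000_000,   # version 5.6+ only
    "memory_used_sindex_bytes": 80_000_000,
}
memory_used_bytes = 2_232_000_000                # total reported for the namespace

# The reported total is the sum of the component metrics listed above.
assert memory_used_bytes == sum(components.values())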

memory_used_data_bytes

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Amount of memory occupied by data. Refer to memory_used_bytes for the total memory accounted for the namespace.

memory_used_index_bytes

[instantaneous][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Amount of memory occupied by the index for this namespace. This will be allocated in shared memory by default (index-type shmem) for the Enterprise Edition.
If your index is persisted, either in block storage (index-type flash, server 4.3 and above) or in persistent memory (index-type pmem, server 4.5 and above), refer instead to index_flash_used_bytes or index_pmem_used_bytes. For these persisted index configurations, the value of memory_used_index_bytes will be 0.
Refer to memory_used_bytes for the total memory accounted for the namespace.

memory_used_set_index_bytes

[instantaneous][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

5.6

Amount of memory occupied by set indexes for this namespace on this node. Refer to memory_used_bytes for the total memory accounted for the namespace.

memory_used_sindex_bytes

[instantaneous][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Amount of memory occupied by secondary indexes for this namespace on this node. Refer to memory_used_bytes for the total memory accounted for the namespace.

migrate-record-receives

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.8.3

Removed:

3.9

Number of record insert requests received by immigration.

migrate-record-retransmits

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.8.3

Removed:

3.9

Number of times emigration has retransmitted records.

migrate-records-skipped

[enterprise][cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.8.3

Removed:

3.9

Number of times emigration did not ship a record because the remote node was already up-to-date.

migrate-records-transmitted

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.8.3

Removed:

3.9

Number of records emigration has read and sent.

migrate-rx-instance-count

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.8.3

Removed:

3.9

Replaced with migrate_rx_instance_count in version 3.9. Number of instance objects managing immigrations. Prior to version 3.8.3, this was called migrate_rx_objs and appeared under server statistics, referring to the number of partitions currently migrating to this node.

migrate-rx-partitions-active

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.8.3

Removed:

3.9

Replaced with migrate_rx_partitions_active in version 3.9. Number of partitions currently immigrating to this node. If migrate-rx-partitions-active is greater than 0 and the cluster is not in maintenance, operations needs to identify why migrations are running. Prior to version 3.8.3, this was called migrate_progress_recv and appeared under server statistics.

migrate-rx-partitions-initial

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.7.0

Removed:

3.9

Total number of migrations this node will receive during the current migration cycle for this namespace.

migrate-rx-partitions-remaining

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.7.0

Removed:

3.9

Number of migrations this node has not yet received during the current migration cycle for this namespace.

migrate-tx-instance-count

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.8.3

Removed:

3.9

Replaced with migrate_tx_instance_count in version 3.9. Number of instance objects managing emigrations. Prior to version 3.8.3, this was called migrate_tx_objs and appeared under server statistics, referring to the number of partitions pending migration out of this node.

migrate-tx-partitions-active

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.8.3

Removed:

3.9

Replaced with migrate_tx_partitions_active in version 3.9. Number of partitions currently emigrating from this node. If migrate-tx-partitions-active is greater than 0 and the cluster is not in maintenance, operations needs to identify why migrations are running. Prior to version 3.8.3, this was called migrate_progress_send and appeared under server statistics.

migrate-tx-partitions-imbalance

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.7.0

Removed:

3.9

Number of partition migration failures, which could lead to partitions being imbalanced. Each increment is also accompanied by a logged warning.

migrate-tx-partitions-initial

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.7.0

Removed:

3.9

Total number of migrations this node will send during the current migration cycle for this namespace.

migrate-tx-partitions-remaining

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.7.0

Removed:

3.9

Number of migrations this node has not yet sent during the current migration cycle for this namespace.

migrate_record_receives

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of record insert requests received by immigration.

migrate_record_retransmits

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of times emigration has retransmitted records.

migrate_records_skipped

[enterprise][cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of times emigration did not ship a record because the remote node was already up-to-date.

migrate_records_transmitted

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of records emigration has read and sent.

migrate_rx_instance_count

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of instance objects managing immigrations. Prior to version 3.8.3, this was called migrate_rx_objs and appeared under server statistics, referring to the number of partitions currently migrating to this node.

migrate_rx_partitions_active

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of partitions currently immigrating to this node. If migrate_rx_partitions_active is greater than 0 and the cluster is not in maintenance, operations needs to identify why migrations are running. Prior to version 3.8.3, this was called migrate_progress_recv and appeared under server statistics.

migrate_rx_partitions_initial

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Total number of migrations this node will receive during the current migration cycle for this namespace.

migrate_rx_partitions_remaining

[instantaneous][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Number of migrations this node has not yet received during the current migration cycle for this namespace.

migrate_signals_active

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.13.0

For finished partition migrations on this node, the number of outstanding clean-up signals sent to participating member nodes and waiting for clean-up acknowledgment. Signals are messages sent from a partition's master node to all other nodes that currently have data for the partition. They notify those nodes that migrations have completed for the partition and that, if they are not a replica, they can now drop the partition.

migrate_signals_remaining

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.13.0

For unfinished partition migrations on this node, the number of clean-up signals still to be sent to participating member nodes as migrations complete. Signals are messages sent from a partition's master node to all other nodes that currently have data for the partition. They notify those nodes that migrations have completed for the partition and that, if they are not a replica, they can now drop the partition.

migrate_tx_instance_count

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of instance objects managing emigrations. Prior to version 3.8.3, this was called migrate_tx_objs and appeared under server statistics, referring to the number of partitions pending migration out of this node.

migrate_tx_partitions_active

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of partitions currently emigrating from this node. If migrate_tx_partitions_active is greater than 0 and the cluster is not in maintenance, operations needs to identify why migrations are running. Prior to version 3.8.3, this was called migrate_progress_send and appeared under server statistics.

migrate_tx_partitions_imbalance

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of partition migration failures, which could lead to partitions being imbalanced. Each increment is also accompanied by a logged warning.

migrate_tx_partitions_initial

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Total number of migrations this node will send during the current migration cycle for this namespace.

migrate_tx_partitions_lead_remaining

[enterprise][instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.3.1

Number of initially scheduled emigrations which are not delayed by the migrate-fill-delay configuration. Lead migrations are typically delta-migrations addressing non-empty partition replica nodes. Delta-migrations generally consume far less storage IO.

migrate_tx_partitions_remaining

[instantaneous][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Number of migrations this node has not yet sent during the current migration cycle for this namespace.
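
A rough progress estimate for the current migration cycle can be derived from the *_partitions_initial and *_partitions_remaining pairs. The sketch below uses hypothetical sample values:

# Hypothetical sample values taken from the namespace statistics.
migrate_tx_partitions_initial = 512
migrate_tx_partitions_remaining = 128
migrate_rx_partitions_initial = 480
migrate_rx_partitions_remaining = 60

initial = migrate_tx_partitions_initial + migrate_rx_partitions_initial
remaining = migrate_tx_partitions_remaining + migrate_rx_partitions_remaining
# Percentage of this cycle's emigrations and immigrations already completed.
progress_pct = 100.0 * (initial - remaining) / initial if initial else 100.0
print(f"migration cycle progress: {progress_pct:.1f}%")   # 81.0%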

min-evicted-ttl

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Removed:

3.3.3

The lowest record TTL that Aerospike has evicted from this namespace.

n_nodes_quiesced

[enterprise][instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.3.1

Removed:

4.4

Renamed to nodes_quiesced as of version 4.4. The number of nodes observed to be quiesced as of the most recent reclustering event. If a single node received the quiesce command, all nodes return 1 for this metric on the subsequent reclustering event; when the quiesced node is shut down, triggering a new reclustering event, this metric returns to 0.

nodes_quiesced

[enterprise][instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.4

The number of nodes observed to be quiesced as of the most recent reclustering event. If a single node received the quiesce command, all nodes return 1 for this metric on the subsequent reclustering event; when the quiesced node is shut down, triggering a new reclustering event, this metric returns to 0.

non-expirable-objects

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Removed:

3.9

Number of records in this namespace with non-expirable TTLs (TTLs of value 0). Replaced by non_expirable_objects as of version 3.9.

non_expirable_objects

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of records in this namespace with non-expirable TTLs (TTLs of value 0).

non_replica_objects

[instantaneous][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.13

Number of records on this node which are neither master nor replicas. This number is non-zero only during migrations, representing additional versions or copies of records. These are records beyond the configured replication factor and may be used during migrations for duplicate resolution.

non_replica_sub_objects

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.13

Removed:

3.14

Number of LDT sub-records on this node which are neither master nor replicas. This number is non-zero only during migration.

non_replica_tombstones

[enterprise][instantaneous][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.13

Number of tombstones on this node which are neither master nor replicas. This number is non-zero only during migration.

nsup-cycle-duration

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Removed:

3.9

Length of the last nsup cycle in seconds. Replaced with nsup_cycle_duration as of version 3.9.

nsup-cycle-sleep-pct

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Removed:

3.9

Percent time spent sleeping in the last nsup cycle. Replaced with nsup_cycle_sleep_pct as of version 3.9.

nsup_cycle_duration

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Length of the last nsup cycle in seconds.

nsup_cycle_sleep_pct

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Removed:

4.5.1

Percent time spent sleeping in the last nsup cycle.

objects

[instantaneous][integer]
Location:

Namespace

Monitoring:

watch

Number of records in this namespace for this node. Does not include tombstones.

Additional information

Example:

Trending objects provides operations insight into this namespace's record fluctuations over time.

ops_sub_tsvc_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.7

Number of ops scan/query subtransactions that failed with an error in the transaction service. For example, protocol or permission errors. Does not include timeouts. In strong-consistency enabled namespaces, this will include transactions against unavailable_partitions and dead_partitions.

ops_sub_tsvc_timeout

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.7

Number of ops scan/query subtransactions that timed out in the transaction service.

ops_sub_write_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.7

Number of ops scan/query write subtransactions that failed with an error. Does not include timeouts.

ops_sub_write_filtered_out

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.7

Number of ops scan/query write subtransactions for which the write did not happen because the record was filtered out with Filter Expressions.

ops_sub_write_success

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.7

Number of successful ops scan/query write subtransactions.

ops_sub_write_timeout

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.7

Number of ops scan/query write subtransactions that timed out.

pending_quiesce

[enterprise][instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.3.1

Reports 'true' when the quiesce info command has been received by a node, or if stay-quiesced is true for the node. When true, the next clustering event will cause this node to quiesce. To trigger a clustering event, issue the recluster info command. To disable, issue the quiesce-undo info command.

pi_query_aggr_abort

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

6.0

Number of primary index query aggregations that were aborted.

pi_query_aggr_complete

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

6.0

Number of primary index query aggregations that completed.

pi_query_aggr_error

[cumulative][integer]
Location:

Namespace

Monitoring:

alert

Introduced:

6.0

Number of primary index query aggregations that failed.

Additional information

Example:

Compare pi_query_aggr_error to pi_query_aggr_complete.

IF ratio is higher than acceptable,
THEN alert operations to investigate.
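
The error-to-complete ratio check above (the same pattern is suggested for several other query and scan metrics in this section) could be sketched as follows; the counter values and the 1% threshold are assumptions, not recommendations:

# Hypothetical counter values and an assumed acceptable ratio.
pi_query_aggr_error = 7
pi_query_aggr_complete = 12_000
max_acceptable_ratio = 0.01

ratio = pi_query_aggr_error / max(pi_query_aggr_complete, 1)
if ratio > max_acceptable_ratio:
    print("ALERT: primary index query aggregation error ratio is higher than acceptable")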

pi_query_long_basic_abort

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

6.0

Number of basic long primary index queries that were aborted.

pi_query_long_basic_complete

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

6.0

Number of basic long primary index queries that completed.

pi_query_long_basic_error

[cumulative][integer]
Location:

Namespace

Monitoring:

alert

Introduced:

6.0

Number of basic long primary index queries that failed.

Additional information

Example:

Compare pi_query_long_basic_error to pi_query_long_basic_complete.

IF ratio is higher than acceptable,
THEN alert operations to investigate.

pi_query_ops_bg_abort

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

6.0

Number of ops background primary index queries that were aborted.

pi_query_ops_bg_complete

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

6.0

Number of ops background primary index queries that completed.

pi_query_ops_bg_error

[cumulative][integer]
Location:

Namespace

Monitoring:

alert

Introduced:

6.0

Number of ops background primary index queries that failed.

Additional information

Example:

Compare pi_query_ops_bg_error to pi_query_ops_bg_complete.

IF ratio is higher than acceptable,
THEN alert operations to investigate.

pi_query_short_basic_complete

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

6.0

Number of basic short primary index queries that completed.

pi_query_short_basic_error

[cumulative][integer]
Location:

Namespace

Monitoring:

alert

Introduced:

6.0

Number of basic short primary index queries that failed.

Additional information

Example:

Compare pi_query_short_basic_error to pi_query_short_basic_complete.

IF ratio is higher than acceptable,
THEN alert operations to investigate.

pi_query_short_basic_timeout

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

6.0

Short primary index queries are not monitored, so they cannot be aborted. They might time out, which is reflected in this statistic.

pi_query_udf_bg_abort

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

6.0

Number of udf background primary index queries that were aborted.

pi_query_udf_bg_complete

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

6.0

Number of udf background primary index queries that completed.

pi_query_udf_bg_error

[cumulative][integer]
Location:

Namespace

Monitoring:

alert

Introduced:

6.0

Number of udf background primary index queries that failed.

Additional information

Example:

Compare pi_query_udf_bg_error to pi_query_udf_bg_complete.

IF ratio is higher than acceptable,
THEN alert operations to investigate.

pmem_available_pct

[enterprise][instantaneous][integer]
Location:

Namespace

Monitoring:

alert

Introduced:

4.8

Measures the minimum contiguous pmem storage file space across all such files in a namespace. The namespace becomes read-only (stop writes) if this value falls below min-avail-pct. It is important for all configured pmem storage files in a namespace to have the same size; otherwise, pmem_available_pct could be low even when a lot of space is available in other files.

Additional information

Not to be confused with pmem_free_pct, which represents the amount of free space across all pmem storage files in a namespace and does not take fragmentation into account.
Here is an example to illustrate the difference between pmem_free_pct and pmem_available_pct. Assume 5 files of 96MB each for a given namespace, where each file has 24MB of data spread across 6 write blocks (with an 8MB write-block-size):
- The pmem_free_pct would be 75%.
- The pmem_available_pct would be 50%.
- If the distribution is not uniform (it usually is not perfectly uniform), pmem_available_pct would reflect the file that has the fewest free blocks.

Example:

IF pmem_available_pct drops below 20%,
THEN warn your operations group.

This condition might indicate that defrag is unable to keep up with the current load.

IF pmem_available_pct drops below 15%,
THEN critical ALERT.

If pmem_available_pct drops below 5%, usable PMEM resources are critically low. This condition might result in stop_writes.
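
A minimal sketch reproducing the worked example above and applying the warning and critical thresholds (the numbers come from that example and are illustrative only):

# 5 files of 96MB each; every file has 24MB of data spread across
# 6 write blocks with an 8MB write-block-size.
file_size_mb = 96
data_per_file_mb = 24
blocks_touched_per_file = 6
write_block_size_mb = 8

pmem_free_pct = 100 * (file_size_mb - data_per_file_mb) / file_size_mb           # 75.0
contiguous_free_mb = file_size_mb - blocks_touched_per_file * write_block_size_mb
pmem_available_pct = 100 * contiguous_free_mb / file_size_mb                     # 50.0

if pmem_available_pct < 15:
    print("CRITICAL: pmem_available_pct below 15%")
elif pmem_available_pct < 20:
    print("WARNING: pmem_available_pct below 20%; check whether defrag keeps up")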

pmem_compression_ratio

[enterprise][moving average][decimal]
Location:

Namespace

Monitoring:

watch

Introduced:

4.8

Measures the average compressed size to uncompressed size ratio for pmem storage. Thus 1.000 indicates no compression and 0.100 indicates a 1:10 compression ratio (90% reduction in size). Note that pmem_compression_ratio will not be included if the compression configuration parameter is set to none.

Additional information

The compression ratio is a moving average. It is calculated based on the most recently written records. Read records do not factor into the ratio. If the written data changes over time then the compression ratio will change with it. In case of a sudden change in data, the indicated compression ratio may lag behind a bit. As a rule of thumb, assume that the compression ratio covers the most recently written 100,000 to 1,000,000 records.
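
As an illustration of how the ratio reads (all values are hypothetical), an uncompressed write volume can be translated into an approximate on-pmem footprint:

# A reported ratio of 0.42 means recently written records occupy
# roughly 42% of their uncompressed size on pmem.
pmem_compression_ratio = 0.42
uncompressed_write_bytes = 10_000_000_000        # 10 GB of uncompressed writes

estimated_stored_bytes = uncompressed_write_bytes * pmem_compression_ratio
print(f"approximately {estimated_stored_bytes / 1e9:.1f} GB stored on pmem")  # 4.2 GB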

pmem_free_pct

[enterprise][instantaneous][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

4.8

Percentage of pmem storage capacity free for this namespace. This is the amount of free storage across all pmem storage files in the namespace. Evictions will be triggered when the used percentage across all storage files (which is represented by 100 - pmem_free_pct) crosses the configured high-water-disk-pct.

Additional information

Not to be confused with pmem_available_pct, which represents the amount of contiguous free space on the pmem storage file that has the least contiguous free space across the namespace.
Here is an example to illustrate the difference between pmem_free_pct and pmem_available_pct. Assume 5 files of 96MB each for a given namespace, where each file has 24MB of data spread across 6 write blocks (with an 8MB write-block-size):
- The pmem_free_pct would be 75%.
- The pmem_available_pct would be 50%.
- If the distribution is not uniform (it usually is not perfectly uniform), pmem_available_pct would reflect the file that has the fewest free blocks.

pmem_total_bytes

[enterprise][instantaneous][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

4.8

Total bytes of pmem storage file space allocated to this namespace on this node.

pmem_used_bytes

[enterprise][instantaneous][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

4.8

Total bytes of pmem storage file space used by this namespace on this node.

Additional information

Example:

Trending pmem_used_bytes provides operations insight into how pmem storage usage changes over time for this namespace.

prole-objects

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Removed:

3.9

Number of records which are proles (replicas) on this node. Replaced by prole_objects as of version 3.9.

prole-sub-objects

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Removed:

3.9

Number of LDT sub-records which are proles (replicas) on this node. Replaced by prole_sub_objects as of version 3.9.

prole_objects

[instantaneous][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Number of records which are proles (replicas) on this node. Does not include tombstones.

prole_sub_objects

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Removed:

3.14

Number of LDT sub-records which are proles (replicas) on this node.

prole_tombstones

[enterprise][instantaneous][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.10

Number of tombstones which are proles (replicas) on this node.

query_agg

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Removed:

5.7

Number of query aggregations attempted. Removed in server version 5.7. Use query_aggr_complete + query_aggr_error + query_aggr_abort instead.

query_agg_abort

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Removed:

5.7

Number of query aggregations aborted by the user, as seen by this node. Renamed to query_aggr_abort in server version 5.7.

query_agg_avg_rec_count

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Removed:

5.7

Average number of records returned by the aggregations underlying query. Renamed to query_aggr_avg_rec_count in server version 5.7.

query_agg_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Removed:

5.7

Number of query aggregations errors due to an internal error. Renamed to query_aggr_error in server version 5.7.

query_agg_success

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Removed:

5.7

Number of query aggregations completed. Renamed to query_aggr_complete in server version 5.7.

query_aggr_abort

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

5.7

Removed:

6.0

Number of query aggregations aborted by the user, as seen by this node. In server 6.0, use si_query_aggr_abort.

query_aggr_avg_rec_count

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

5.7

Removed:

6.0

Average number of records returned by the aggregations underlying query.

query_aggr_complete

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

5.7

Removed:

6.0

Number of query aggregations completed. In server 6.0, use si_query_aggr_complete.

query_aggr_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

5.7

Removed:

6.0

Number of query aggregation errors due to an internal error. In server 6.0, use si_query_aggr_error.

query_basic_abort

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

5.7

Number of user aborted secondary index basic queries.

query_basic_avg_rec_count

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

5.7

Removed:

6.0

Average number of records returned by all secondary index basic queries.

query_basic_complete

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

5.7

Removed:

6.0

Number of secondary index basic queries which completed successfully.

query_basic_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

5.7

Number of secondary index basic queries which returned error.

query_fail

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Removed:

6.0

Number of queries which failed due to an internal error. These are failures that are not part of query lookup (see query_lookup_error), query aggregation (see query_agg_error), or background UDF queries (see query_udf_bg_failure).

query_false_positives

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

5.7

Removed:

6.0

Number of entries that were shortlisted from the secondary index but whose bin values did not match the query clause. This can happen when a bin value changes during query execution.

query_long_queue_full

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Removed:

6.0

Number of queue-full errors for long-running queries.

query_long_reqs

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Removed:

6.0

Number of long-running queries currently in progress.

query_lookup_abort

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Removed:

5.7

Number of user aborted secondary index queries. Renamed to query_basic_abort in server version 5.7.

query_lookup_avg_rec_count

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Removed:

5.7

Average number of records returned by all secondary index query look-ups. Renamed to query_basic_avg_rec_count in server version 5.7.

query_lookup_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Removed:

5.7

Number of secondary index query look-up errors. Renamed to query_basic_error in server version 5.7.

query_lookup_success

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Removed:

5.7

Number of secondary index look-ups which succeeded. Renamed to query_basic_complete in server version 5.7.

query_lookups

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Removed:

5.7

Number of secondary index lookups attempted. Removed in server version 5.7. Use query_basic_complete + query_basic_error + query_basic_abort instead.

query_ops_bg_abort

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

5.7

Removed:

6.0

Number of ops background queries that were aborted. In server 6.0, use si_query_ops_bg_abort.

query_ops_bg_complete

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

5.7

Removed:

6.0

Number of ops background queries that completed. In server 6.0, use si_query_ops_bg_complete.

query_ops_bg_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

5.7

Removed:

6.0

Number of ops background queries that returned error. In server 6.0, use si_query_ops_bg_error.

query_ops_bg_failure

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.7

Removed:

5.7

Number of ops background queries that failed. Removed as of server version 5.7; use query_ops_bg_error + query_ops_bg_abort instead.

query_ops_bg_success

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.7

Removed:

5.7

Number of ops background queries that completed. Renamed to query_ops_bg_complete in server version 5.7.

query_proto_compression_ratio

[enterprise][moving average][decimal]
Location:

Namespace

Monitoring:

optional

Introduced:

4.8

Measures the average compressed size to uncompressed size ratio for protocol message data in query responses to the client. Thus 1.000 indicates no compression and 0.100 indicates a 1:10 compression ratio (90% reduction in size).

Additional information

The compression ratio is a moving average. It is calculated based on the most recent client responses. If the response message data changes over time then the compression ratio will change with it. In case of a sudden change in response data, the indicated compression ratio may lag behind a bit. As a rule of thumb, assume that the compression ratio covers the most recent 100,000 to 1,000,000 client responses.

query_proto_uncompressed_pct

[enterprise][instantaneous][decimal]
Location:

Namespace

Monitoring:

optional

Introduced:

4.8

Measures the percentage of query responses to the client with uncompressed protocol message data. Thus 0.000 indicates all responses with compressed data, and 100.000 indicates no responses with compressed data. For example, if protocol message data compression is not used, this metric will remain set to 0.000. If protocol message data compression is then turned on and all responses are compressed, this metric will remain set to 0.000. The only way this metric will ever be set to a value different than 0.000 is if compression is used, but some responses are not compressed (which happens when the uncompressed size is so small that the server does not try to compress, or when the compression fails).

Additional information

The percentage is a moving average. It is calculated based on the most recent client responses. If the response message data changes over time then the percentage will change with it. In case of a sudden change in response data, the indicated percentage may lag behind a bit. As a rule of thumb, assume that the percentage covers the most recent 100,000 to 1,000,000 client responses.

query_reqs

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Removed:

6.0

Number of query requests ever attempted on this node. Even very early failures would be counted here, as opposed to query_short_running and query_long_running which would increment a bit later.

query_short_queue_full

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Removed:

6.0

Number of queue-full errors for short-running queries.

query_short_reqs

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Removed:

6.0

Number of short-running queries currently in progress.

query_udf_bg_abort

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

5.7

Removed:

6.0

Number of udf background queries that were aborted. In server 6.0, use si_query_udf_bg_abort.

query_udf_bg_complete

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

5.7

Removed:

6.0

Number of udf background queries that completed. In server 6.0, use si_query_udf_bg_complete.

query_udf_bg_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

5.7

Removed:

6.0

Number of udf background queries which returned error. In server 6.0, use si_query_udf_bg_error.

query_udf_bg_failure

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Removed:

5.7

Number of udf background queries that failed. Removed as of server version 5.7; use query_udf_bg_error + query_udf_bg_abort instead.

query_udf_bg_success

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Removed:

5.7

Number of udf background queries that completed. Renamed to query_udf_bg_complete in server version 5.7.

re_repl_error

[enterprise][cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.0

Number of re-replication errors which were not timeout. Re-replications would happen for namespaces operating under the strong-consistency mode when a record does not successfully replicate on the initial attempt.

re_repl_success

[enterprise][cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.0

Number of successful re-replications. Re-replications would happen for namespaces operating under the strong-consistency mode when a record does not successfully replicate on the initial attempt.

re_repl_timeout

[enterprise][cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.0

Number of re-replications that ended in timeout. Re-replications would happen for namespaces operating under the strong-consistency mode when a record does not successfully replicate on the initial attempt.

record_proto_compression_ratio

[enterprise][instantaneous][decimal]
Location:

Namespace

Monitoring:

optional

Introduced:

4.8

Measures the average compressed size to uncompressed size ratio for protocol message data in single-record transaction client responses. Thus 1.000 indicates no compression and 0.100 indicates a 1:10 compression ratio (90% reduction in size).

Additional information

The compression ratio is a moving average. It is calculated based on the most recent client responses. If the response message data changes over time then the compression ratio will change with it. In case of a sudden change in response data, the indicated compression ratio may lag behind a bit. As a rule of thumb, assume that the compression ratio covers the most recent 100,000 to 1,000,000 client responses.

record_proto_uncompressed_pct

[enterprise][moving average][decimal]
Location:

Namespace

Monitoring:

optional

Introduced:

4.8

Measures the percentage of single-record transaction client responses with uncompressed protocol message data. Thus 0.000 indicates all responses with compressed data, and 100.000 indicates no responses with compressed data. For example, if protocol message data compression is not used, this metric will remain set to 0.000. If protocol message data compression is then turned on and all responses are compressed, this metric will remain set to 0.000. The only way this metric will ever be set to a value different than 0.000 is if compression is used, but some responses are not compressed (which happens when the uncompressed size is so small that the server does not try to compress, or when the compression fails).

Additional information

The percentage is a moving average. It is calculated based on the most recent client responses. If the response message data changes over time then the percentage will change with it. In case of a sudden change in response data, the indicated percentage may lag behind a bit. As a rule of thumb, assume that the percentage covers the most recent 100,000 to 1,000,000 client responses.

retransmit_all_batch_sub_dup_res

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of retransmits that occurred during batch subtransactions that were being duplicate resolved. Note this includes retransmits originating on the client as well as proxying nodes.

retransmit_all_delete_dup_res

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of retransmits that occurred during delete transactions that were being duplicate resolved. Note this includes retransmits originating on the client as well as proxying nodes.

retransmit_all_delete_repl_write

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of retransmits that occurred during delete transactions that were being replica written. Note this includes retransmits originating on the client as well as proxying nodes.

retransmit_all_read_dup_res

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of retransmits that occurred during read transactions that were being duplicate resolved. Note this includes retransmits originating on the client as well as proxying nodes.

retransmit_all_udf_dup_res

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of retransmits that occurred during client initiated udf transactions that were being duplicate resolved. Note this includes retransmits originating on the client as well as proxying nodes.

retransmit_all_udf_repl_write

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of retransmits that occurred during client initiated udf transactions that were being replica written. Note this includes retransmits originating on the client as well as proxying nodes.

retransmit_all_write_dup_res

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of retransmits that occurred during write transactions that were being duplicate resolved. Note this includes retransmits originating on the client as well as proxying nodes.

retransmit_all_write_repl_write

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of retransmits that occurred during write transactions that were being replica written. Note this includes retransmits originating on the client as well as proxying nodes.

retransmit_batch_sub_dup_res

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.10.1

Removed:

4.5.1

Number of retransmits that occurred during batch subtransactions that were being duplicate resolved. Replaced with retransmit_all_batch_sub_dup_res as of version 4.5.1.

retransmit_client_delete_dup_res

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.10.1

Removed:

4.5.1

Number of retransmits that occurred during delete transactions that were being duplicate resolved. Replaced with retransmit_all_delete_dup_res as of version 4.5.1.

retransmit_client_delete_repl_write

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.10.1

Removed:

4.5.1

Number of retransmits that occurred during delete transactions that were being replica written. Replaced with retransmit_all_delete_repl_write as of version 4.5.1.

retransmit_client_read_dup_res

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.10.1

Removed:

4.5.1

Number of retransmits that occurred during read transactions that were being duplicate resolved. Replaced with retransmit_all_read_dup_res as of version 4.5.1.

retransmit_client_udf_dup_res

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.10.1

Removed:

4.5.1

Number of retransmits that occurred during client initiated udf transactions that were being duplicate resolved. Replaced with retransmit_all_udf_dup_res as of version 4.5.1.

retransmit_client_udf_repl_write

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.10.1

Removed:

4.5.1

Number of retransmits that occurred during client initiated udf transactions that were being replica written. Replaced with retransmit_all_udf_repl_write as of version 4.5.1.

retransmit_client_write_dup_res

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.10.1

Removed:

4.5.1

Number of retransmits that occurred during write transactions that were being duplicate resolved. Replaced with retransmit_all_write_dup_res as of version 4.5.1.

retransmit_client_write_repl_write

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.10.1

Removed:

4.5.1

Number of retransmits that occurred during write transactions that were being replica written. Replaced with retransmit_all_write_repl_write as of version 4.5.1.

retransmit_nsup_repl_write

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.10.1

Number of retransmits that occurred during nsup initiated delete transactions that were being replica written.

retransmit_ops_sub_dup_res

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.7

Number of retransmits that occurred during write subtransactions of ops scan/query jobs that were being duplicate resolved.

retransmit_ops_sub_repl_write

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.7

Number of retransmits that occurred during write subtransactions of ops scan/query jobs that were being replica written.

retransmit_udf_sub_dup_res

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.10.1

Number of retransmits that occurred during udf subtransactions of scan/query background udf jobs that were being duplicate resolved.

retransmit_udf_sub_repl_write

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.10.1

Number of retransmits that occurred during udf subtransactions of scan/query background udf jobs that were being replica written.

scan_aggr_abort

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Removed:

6.0

Number of scan aggregations that were aborted. In server 6.0, use pi_query_aggr_abort.

scan_aggr_complete

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Removed:

6.0

Number of scan aggregations that completed. In server 6.0, use pi_query_aggr_complete.

scan_aggr_error

[cumulative][integer]
Location:

Namespace

Monitoring:

alert

Introduced:

3.9

Removed:

6.0

Number of scan aggregations that failed.

Additional information

Example:

Compare scan_aggr_error to scan_aggr_complete.

IF ratio is higher than acceptable,
THEN alert operations to investigate. In server 6.0, use pi_query_aggr_error.

scan_basic_abort

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Removed:

6.0

Number of basic scans that were aborted. In server 6.0, use pi_query_long_basic_abort.

scan_basic_complete

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Removed:

6.0

Number of basic scans that completed. In server 6.0, use pi_query_long_basic_complete.

scan_basic_error

[cumulative][integer]
Location:

Namespace

Monitoring:

alert

Introduced:

3.9

Removed:

6.0

Number of basic scans that failed.

Additional information

Example:

Compare scan_basic_error to scan_basic_complete.

IF ratio is higher than acceptable,
THEN alert operations to investigate. In server 6.0, use pi_query_long_basic_error.

scan_ops_bg_abort

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

4.7

Removed:

6.0

Number of ops background scans that were aborted. In server 6.0, use pi_query_ops_bg_abort.

scan_ops_bg_complete

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

4.7

Removed:

6.0

Number of ops background scans that completed. In server 6.0, use pi_query_ops_bg_complete.

scan_ops_bg_error

[cumulative][integer]
Location:

Namespace

Monitoring:

alert

Introduced:

4.7

Removed:

6.0

Number of ops background scans that failed.

Additional information

Example:

Compare scan_ops_bg_error to scan_ops_bg_complete.

IF ratio is higher than acceptable,
THEN alert operations to investigate. In server 6.0, use pi_query_ops_bg_error.

scan_proto_compression_ratio

[enterprise][moving average][decimal]
Location:

Namespace

Monitoring:

optional

Introduced:

4.8

Removed:

6.0

Measures the average compressed size to uncompressed size ratio for protocol message data in basic scan or aggregation scan client responses. Thus 1.000 indicates no compression and 0.100 indicates a 1:10 compression ratio (90% reduction in size).

Additional information

The compression ratio is a moving average. It is calculated based on the most recent client responses. If the response message data changes over time then the compression ratio will change with it. In case of a sudden change in response data, the indicated compression ratio may lag behind a bit. As a rule of thumb, assume that the compression ratio covers the most recent 100,000 to 1,000,000 client responses.

scan_proto_uncompressed_pct

[enterprise][instantaneous][decimal]
Location:

Namespace

Monitoring:

optional

Introduced:

4.8

Removed:

6.0

Measures the percentage of basic scan or aggregation scan client responses with uncompressed protocol message data. Thus 0.000 indicates all responses with compressed data, and 100.000 indicates no responses with compressed data. For example, if protocol message data compression is not used, this metric will remain set to 0.000. If protocol message data compression is then turned on and all responses are compressed, this metric will remain set to 0.000. The only way this metric will ever be set to a value different than 0.000 is if compression is used, but some responses are not compressed (which happens when the uncompressed size is so small that the server does not try to compress, or when the compression fails).

Additional information

The percentage is a moving average. It is calculated based on the most recent client responses. If the response message data changes over time then the percentage will change with it. In case of a sudden change in response data, the indicated percentage may lag behind a bit. As a rule of thumb, assume that the percentage covers the most recent 100,000 to 1,000,000 client responses.

scan_udf_bg_abort

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Removed:

6.0

Number of udf background scans that were aborted. In server 6.0, use pi_query_udf_bg_abort.

scan_udf_bg_complete

[cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.9

Removed:

6.0

Number of udf background scans that completed. In server 6.0, use pi_query_udf_bg_complete.

scan_udf_bg_error

[cumulative][integer]
Location:

Namespace

Monitoring:

alert

Introduced:

3.9

Removed:

6.0

Number of udf background scans that failed.

Additional information

Example:

Compare scan_udf_bg_error to scan_udf_bg_complete.

IF ratio is higher than acceptable,
THEN alert operations to investigate. In server 6.0, use pi_query_udf_bg_error.

set-deleted-objects

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Removed:

3.9

Number of records deleted by a set. Renamed to set_deleted_objects as of version 3.9.

set-evicted-objects

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Removed:

Yes

Number of records evicted by a set.

set_deleted_objects

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of records deleted by a set.

si_query_aggr_abort

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of secondary index query aggregations aborted by the user, as seen by this node.

si_query_aggr_complete

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of secondary index query aggregations completed.

si_query_aggr_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of secondary index query aggregation errors due to an internal error.

si_query_ops_bg_abort

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of ops background secondary index queries that were aborted.

si_query_ops_bg_complete

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of ops background secondary index queries that completed.

si_query_ops_bg_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of ops background secondary index queries that returned error.

si_query_udf_bg_abort

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of udf background secondary index queries that were aborted.

si_query_udf_bg_complete

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of udf background secondary index queries that completed.

si_query_udf_bg_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

6.0

Number of udf background secondary index queries which returned error.

sindex-used-bytes-memory

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Removed:

3.9

Amount of memory occupied by secondary indexes for this namespace on this node. Replaced with memory_used_sindex_bytes as of version 3.9.

sindex_gc_cleaned

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

5.7

Number of secondary index entries cleaned by sindex GC.

smd_evict_void_time

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

The cluster-wide specified eviction depth, expressed as a void time in seconds since 1 January 2010 UTC. This is distributed to all nodes via SMD. This may be larger than evict_void_time -- evict_void_time will eventually advance to this value.

stop-writes

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Removed:

3.9

If true this namespace is currently not allowing writes. Replaced with stop_writes as of version 3.9.

Additional information

Example:

IF stop-writes is true,
THEN critical ALERT.

Until the cause is corrected, the system will reject all writes.

stop_writes

[instantaneous][integer]
Location:

Namespace

Monitoring:

alert

Introduced:

3.9

If true, this namespace is currently not allowing writes. Error code 22 will be returned. Note that migration writes as well as prole writes will still be allowed; only client-originated writes will be denied. This will happen if any one of the following thresholds is breached: min-avail-pct, stop-writes-pct, or xdr-min-digestlog-free-pct.

Additional information

Example:

IF stop_writes is true,
THEN critical ALERT.

Until the cause is corrected, the system will reject all writes.

storage-engine.device[ix].age

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.3

Shows percentage of lifetime (total usage) claimed by OEM for underlying device (may exceed 100). Value will be -1 unless underlying device is NVMe. 'ix' is the device index. For example, storage-engine.device[0]=/dev/xvd1 and storage-engine.device[1]=/dev/xvc1 for 2 devices specified in the configuration. It is a measure of how much of the drive's projected lifetime according to the manufacturer has been used at any point in time. When the SSD is brand new, its value will report '0' and when its projected lifetime has been reached, it shows '100', reporting that 100% of the projected lifetime has been used. When the value gets over 100%, the SSD has reached the lifetime specified by the OEM.

storage-engine.device[ix].defrag_q

[instantaneous][integer]
Location:

Namespace

Monitoring:

alert

Introduced:

4.3

Number of wblocks queued to be defragged on device[ix]. 'ix' is the device index. For example, storage-engine.device[0]=/dev/xvd1 and storage-engine.device[1]=/dev/xvc1 for 2 devices specified in the configuration.

Additional information

Example:

Measured per-device or per-file depending on the storage configuration.

IF storage-engine.device[ix].defrag_q or storage-engine.file[ix].defrag_q continues to increase over time,
THEN alert operations to investigate the cause.
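
A minimal sketch of this trend check, assuming periodic samples of the defrag queue length (the sample values and window size are hypothetical):

# Hypothetical samples of storage-engine.device[ix].defrag_q, e.g. one per minute.
defrag_q_samples = [120, 180, 260, 410, 650]

# Alert if the queue grew at every sample in the window.
if all(later > earlier for earlier, later in zip(defrag_q_samples, defrag_q_samples[1:])):
    print("ALERT: defrag_q is steadily increasing; defrag may not be keeping up")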

storage-engine.device[ix].defrag_reads

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.3

Number of wblocks defrag has read from device[ix]. 'ix' is the device index. For example, storage-engine.device[0]=/dev/xvd1 and storage-engine.device[1]=/dev/xvc1 for 2 devices specified in the configuration.

storage-engine.device[ix].defrag_writes

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.3

Number of wblocks defrag has written to device[ix]. 'ix' is the device index. For example, storage-engine.device[0]=/dev/xvd1 and storage-engine.device[1]=/dev/xvc1 for 2 devices specified in the configuration.

storage-engine.device[ix].free_wblocks

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.3

Number of wblocks (write blocks) free on device[ix]. 'ix' is the device index. For example, storage-engine.device[0]=/dev/xvd1 and storage-engine.device[1]=/dev/xvc1 for 2 devices specified in the configuration.

storage-engine.device[ix].shadow_write_q

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.3

Number of wblocks queued to be written to the shadow device of device[ix]. 'ix' is the device index. For example, storage-engine.device[0]=/dev/xvd1 and storage-engine.device[1]=/dev/xvc1 for 2 devices specified in the configuration. This statistic is not available in the log ticker.

storage-engine.device[ix].used_bytes

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.3

Number of bytes used for data on device[ix]. 'ix' is the device index. For example, storage-engine.device[0]=/dev/xvd1 and storage-engine.device[1]=/dev/xvc1 for 2 devices specified in the configuration.

storage-engine.device[ix].write_q

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.3

Number of wblocks queued to be written to device[ix]. 'ix' is the device index. For example, storage-engine.device[0]=/dev/xvd1 and storage-engine.device[1]=/dev/xvc1 for 2 devices specified in the configuration. Includes blocks written by the defragmentation sub-system.

storage-engine.device[ix].writes

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.3

Number of wblocks written to device[ix] since Aerospike started. 'ix' is the device index. For example, storage-engine.device[0]=/dev/xvd1 and storage-engine.device[1]=/dev/xvc1 for 2 devices specified in the configuration. Includes defragmentation writes.

storage-engine.file[ix].age

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.3

Shows the percentage of the lifetime (total usage) claimed by the OEM for the underlying device; the value may exceed 100. Value will be -1 unless the underlying device is NVMe. 'ix' is the file index. For example, storage-engine.file[0]=/opt/aerospike/test0.dat and storage-engine.file[1]=/opt/aerospike/test2.dat for 2 files specified in the configuration.

storage-engine.file[ix].defrag_q

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.3

Number of wblocks queued to be defragged on file[ix]. 'ix' is the file index. For example, storage-engine.file[0]=/opt/aerospike/test0.dat and storage-engine.file[1]=/opt/aerospike/test2.dat for 2 files specified in the configuration.

storage-engine.file[ix].defrag_reads

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.3

Number of wblocks defrag has read from file[ix]. 'ix' is the file index. For example, storage-engine.file[0]=/opt/aerospike/test0.dat and storage-engine.file[1]=/opt/aerospike/test2.dat for 2 files specified in the configuration.

storage-engine.file[ix].defrag_writes

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.3

Number of wblocks defrag has written to file[ix]. 'ix' is the file index. For example, storage-engine.file[0]=/opt/aerospike/test0.dat and storage-engine.file[1]=/opt/aerospike/test2.dat for 2 files specified in the configuration.

storage-engine.file[ix].free_wblocks

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.3

Number of wblocks (write blocks) free on file[ix]. 'ix' is the file index. For example, storage-engine.file[0]=/opt/aerospike/test0.dat and storage-engine.file[1]=/opt/aerospike/test2.dat for 2 files specified in the configuration.

storage-engine.file[ix].shadow_write_q

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.3

Number of wblocks queued to be written to the shadow file of file[ix]. 'ix' is the file index. For example, storage-engine.file[0]=/opt/aerospike/test0.dat and storage-engine.file[1]=/opt/aerospike/test2.dat for 2 files specified in the configuration. This statistic is not available in the log ticker.

storage-engine.file[ix].used_bytes

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.3

Number of bytes used for data on file[ix]. 'ix' is the file index. For example, storage-engine.file[0]=/opt/aerospike/test0.dat and storage-engine.file[1]=/opt/aerospike/test2.dat for 2 files specified in the configuration.

storage-engine.file[ix].write_q

[instantaneous][integer]
Location:

Namespace

Monitoring:

alert

Introduced:

4.3

Number of wblocks queued to be written to file[ix]. 'ix' is the file index. For example, storage-engine.file[0]=/opt/aerospike/test0.dat and storage-engine.file[1]=/opt/aerospike/test2.dat for 2 files specified in the configuration.

Additional information

Example:

Measured per-device or per-file depending on the storage configuration.

IF storage-engine.device[ix].write_q or storage-engine.file[ix].write_q is greater than 1,
THEN alert operations to investigate the cause.
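
A sketch of that check across all configured devices and files, assuming a placeholder namespace named test:

# Alert if any device or file write queue holds more than 1 wblock.
asinfo -v 'namespace/test' -l \
  | grep -E 'storage-engine\.(device|file)\[[0-9]+\]\.write_q=' \
  | while IFS='=' read -r stat value; do
      if [ "$value" -gt 1 ]; then
          echo "ALERT: $stat = $value queued wblocks"
      fi
    done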

storage-engine.file[ix].writes

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.3

Number of wblocks written to file[ix] since Aerospike started. 'ix' is the file index. For example, storage-engine.file[0]=/opt/aerospike/test0.dat and storage-engine.file[1]=/opt/aerospike/test2.dat for 2 files specified in the configuration. When running with commit-to-device set to true, this counter only accounts for full blocks written and therefore only counts blocks written through the defragmentation process, because client writes are written to disk individually rather than at the block level. Includes defragmentation writes.

sub_objects

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of LDT sub objects. Also aggregated at the service statistic level under the same name.

tombstones

[enterprise][instantaneous][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

3.10

Total number of tombstones in this namespace on this node.

total-bytes-disk

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Removed:

3.9

Total bytes of disk space allocated to this namespace on this node. Refer to the device_total_bytes stat as of 3.9.

total-bytes-memory

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Removed:

3.9

Total bytes of memory allocated to this namespace on this node. Refer to the memory-size configuration parameter.

truncate_lut

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.12

The most covering truncate_lut for this namespace. See truncate or truncate-namespace.

truncated_records

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.12

The total number of records deleted by truncation for this namespace (includes set truncations). See truncate or truncate-namespace.

udf_sub_lang_delete_success

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of successful udf delete sub-transactions (for scan/query background udf jobs). Refer to the udf_sub_udf_complete, udf_sub_udf_error, udf_sub_udf_filtered_out, udf_sub_udf_timeout statistics for the containing udf operation statuses.

udf_sub_lang_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of udf sub-transactions errors (for scan/query background udf jobs). Refer to the udf_sub_udf_complete, udf_sub_udf_error, udf_sub_udf_filtered_out, udf_sub_udf_timeout statistics for the containing udf operation statuses.

udf_sub_lang_read_success

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of successful udf read sub-transactions (for scan/query background udf jobs). Refer to the udf_sub_udf_complete, udf_sub_udf_error, udf_sub_udf_filtered_out, udf_sub_udf_timeout statistics for the containing udf operation statuses.

udf_sub_lang_write_success

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of successful udf write sub-transactions (for scan/query background udf jobs). Refer to the udf_sub_udf_complete, udf_sub_udf_error, udf_sub_udf_filtered_out, udf_sub_udf_timeout statistics for the containing udf operation statuses.

udf_sub_tsvc_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of udf subtransactions that failed with an error in the transaction service, before attempting to handle the transaction (for scan/query background udf jobs). For example protocol errors or security permission mismatch. Does not include timeouts. In strong-consistency enabled namespaces, this includes transactions against unavailable_partitions and dead_partitions.

udf_sub_tsvc_timeout

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of udf subtransactions that timed out in the transaction service, before attempting to handle the transaction (for scan/query background udf jobs).

udf_sub_udf_complete

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of completed udf subtransactions (for scan/query background udf jobs). Refer to the udf_sub_lang_delete_success, udf_sub_lang_error, udf_sub_lang_read_success, udf_sub_lang_write_success statistics for the underlying operation statuses.

udf_sub_udf_error

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of failed udf subtransactions (for scan/query background udf jobs). Does not include timeouts. Refer to the udf_sub_lang_delete_success, udf_sub_lang_error, udf_sub_lang_read_success, udf_sub_lang_write_success statistics for the underlying operation statuses.

udf_sub_udf_filtered_out

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.7

Number of udf subtransactions that did not happen because the record was filtered out with Filter Expressions.

udf_sub_udf_timeout

[cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Number of udf subtransactions that timed out (for scan/query background udf jobs). Refer to the udf_sub_lang_delete_success, udf_sub_lang_error, udf_sub_lang_read_success, udf_sub_lang_write_success statistics for the underlying operation statuses.

unavailable_partitions

[enterprise][instantaneous][integer]
Location:

Namespace

Monitoring:

alert

Introduced:

4.0

Number of unavailable partitions for this namespace (when using strong-consistency). This is the number of partitions that are unavailable when roster nodes are missing. Will turn into dead_partitions if still unavailable when all roster nodes are present.

Additional information

Example:

IF unavailable_partitions is not zero,
THEN critical ALERT.

Check for network issues and make sure the cluster forms properly.

note

Some partitions would typically be unavailable under some cluster split situations or when removing more than replication-factor number of nodes from a strong-consistency enabled namespace. Refer to the Configuring Strong Consistency and Consistency Management pages for further details.

unreplicated_records

[enterprise][instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

5.7

Number of unreplicated records in the namespace. Applicable only for namespaces operating under the strong-consistency mode.

used-bytes-disk

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Removed:

3.9

Total bytes of disk space used by this namespace on this node. Replaced with device_used_bytes as of version 3.9.

Additional information

Example:

Trending used-bytes-disk provides operations with insight into how disk usage changes over time for this namespace.

used-bytes-memory

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Removed:

3.9

Total bytes of memory used by this namespace on this node. Replaced with memory_used_bytes as of 3.9.

Additional information

Example:

Trending used-bytes-memory provides operations with insight into how memory usage changes over time for this namespace.

write-smoothing-period

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Removed:

Yes

Removed

xdr_bin_cemeteries

[enterprise][instantaneous][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

5.5

Number of tombstones with bin-tombstones. They are generated when bin convergence is enabled and a record is durably deleted.

xdr_client_delete_error

[enterprise][cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of delete requests initiated by XDR that failed on the namespace on this node. For the total number of XDR initiated delete requests against this namespace on this node (destination node), add up the relevant XDR client and from_proxy statistics: xdr_client_delete_success, xdr_client_delete_error, xdr_client_delete_timeout, xdr_client_delete_not_found, xdr_from_proxy_delete_success, xdr_from_proxy_delete_error, xdr_from_proxy_delete_timeout, xdr_from_proxy_delete_not_found.
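
For instance, the total can be computed from the namespace statistics with a small awk sketch (the host address and namespace name test are placeholders):

# Sum all XDR-initiated delete outcomes (client and from_proxy) for this namespace on this node.
asinfo -h 127.0.0.1 -v 'namespace/test' -l \
  | awk -F= '/^(xdr_client_delete|xdr_from_proxy_delete)_(success|error|timeout|not_found)=/ {total += $2}
             END {print "total XDR-initiated deletes:", total}'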

xdr_client_delete_not_found

[enterprise][cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of delete requests initiated by XDR that failed on the namespace on this node due to the record not being found. For the total number of XDR initiated delete requests against this namespace on this node (destination node), add up the relevant XDR client and from_proxy statistics: xdr_client_delete_success, xdr_client_delete_error, xdr_client_delete_timeout, xdr_client_delete_not_found, xdr_from_proxy_delete_success, xdr_from_proxy_delete_error, xdr_from_proxy_delete_timeout, xdr_from_proxy_delete_not_found.

xdr_client_delete_success

[enterprise][cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of delete requests initiated by XDR that succeeded on the namespace on this node. For the total number of XDR initiated delete requests against this namespace on this node (destination node), add up the relevant XDR client and from_proxy statistics: xdr_client_delete_success, xdr_client_delete_error, xdr_client_delete_timeout, xdr_client_delete_not_found, xdr_from_proxy_delete_success, xdr_from_proxy_delete_error, xdr_from_proxy_delete_timeout, xdr_from_proxy_delete_not_found.

xdr_client_delete_timeout

[enterprise][cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of delete requests initiated by XDR that timed out on the namespace on this node. For the total number of XDR initiated delete requests against this namespace on this node (destination node), add up the relevant XDR client and from_proxy statistics: xdr_client_delete_success, xdr_client_delete_error, xdr_client_delete_timeout, xdr_client_delete_not_found, xdr_from_proxy_delete_success, xdr_from_proxy_delete_error, xdr_from_proxy_delete_timeout, xdr_from_proxy_delete_not_found.

xdr_client_write_error

[enterprise][cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

4.5.1

Number of write requests initiated by XDR that failed on the namespace on this node. For the total number of XDR initiated write requests against this namespace on this node (destination node), add up the relevant XDR client and from_proxy statistics: xdr_client_write_success, xdr_client_write_error, xdr_client_write_timeout, xdr_from_proxy_write_success, xdr_from_proxy_write_error, xdr_from_proxy_write_timeout.

xdr_client_write_success

[enterprise][cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

4.5.1

Number of write requests initiated by XDR that succeeded on the namespace on this node. For the total number of XDR initiated write requests against this namespace on this node (destination node), add up the relevant XDR client and from_proxy statistics: xdr_client_write_success, xdr_client_write_error, xdr_client_write_timeout, xdr_from_proxy_write_success, xdr_from_proxy_write_error, xdr_from_proxy_write_timeout.

xdr_client_write_timeout

[enterprise][cumulative][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

4.5.1

Number of write requests initiated by XDR that timed out on the namespace on this node. For the total number of XDR initiated write requests against this namespace on this node (destination node), add up the relevant XDR client and from_proxy statistics: xdr_client_write_success, xdr_client_write_error, xdr_client_write_timeout, xdr_from_proxy_write_success, xdr_from_proxy_write_error, xdr_from_proxy_write_timeout.

xdr_from_proxy_delete_error

[enterprise][cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of errors for XDR delete transactions proxied from another node. For the total number of XDR initiated delete requests against this namespace on this node (destination node), add up the relevant XDR client and from_proxy statistics: xdr_client_delete_success, xdr_client_delete_error, xdr_client_delete_timeout, xdr_client_delete_not_found, xdr_from_proxy_delete_success, xdr_from_proxy_delete_error, xdr_from_proxy_delete_timeout, xdr_from_proxy_delete_not_found.

xdr_from_proxy_delete_not_found

[enterprise][cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of XDR delete transactions proxied from another node that resulted in not found. For the total number of XDR initiated delete requests against this namespace on this node (destination node), add up the relevant XDR client and from_proxy statistics: xdr_client_delete_success, xdr_client_delete_error, xdr_client_delete_timeout, xdr_client_delete_not_found, xdr_from_proxy_delete_success, xdr_from_proxy_delete_error, xdr_from_proxy_delete_timeout, xdr_from_proxy_delete_not_found.

xdr_from_proxy_delete_success

[enterprise][cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of successful XDR delete transactions proxied from another node. For the total number of XDR initiated delete requests against this namespace on this node (destination node), add up the relevant XDR client and from_proxy statistics: xdr_client_delete_success, xdr_client_delete_error, xdr_client_delete_timeout, xdr_client_delete_not_found, xdr_from_proxy_delete_success, xdr_from_proxy_delete_error, xdr_from_proxy_delete_timeout, xdr_from_proxy_delete_not_found.

xdr_from_proxy_delete_timeout

[enterprise][cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of timeouts for XDR delete transactions proxied from another node. For the total number of XDR initiated delete requests against this namespace on this node (destination node), add up the relevant XDR client and from_proxy statistics: xdr_client_delete_success, xdr_client_delete_error, xdr_client_delete_timeout, xdr_client_delete_not_found, xdr_from_proxy_delete_success, xdr_from_proxy_delete_error, xdr_from_proxy_delete_timeout, xdr_from_proxy_delete_not_found.

xdr_from_proxy_write_error

[enterprise][cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of errors for XDR write transactions proxied from another node. For the total number of XDR initiated write requests against this namespace on this node (destination node), add up the relevant XDR client and from_proxy statistics: xdr_client_write_success, xdr_client_write_error, xdr_client_write_timeout, xdr_from_proxy_write_success, xdr_from_proxy_write_error, xdr_from_proxy_write_timeout.

xdr_from_proxy_write_success

[enterprise][cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of successful XDR write transactions proxied from another node. For the total number of XDR initiated write requests against this namespace on this node (destination node), add up the relevant XDR client and from_proxy statistics: xdr_client_write_success, xdr_client_write_error, xdr_client_write_timeout, xdr_from_proxy_write_success, xdr_from_proxy_write_error, xdr_from_proxy_write_timeout.

xdr_from_proxy_write_timeout

[enterprise][cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

4.5.1

Number of timeouts for XDR write transactions proxied from another node. For the total number of XDR initiated write requests against this namespace on this node (destination node), add up the relevant XDR client and from_proxy statistics: xdr_client_write_success, xdr_client_write_error, xdr_client_write_timeout, xdr_from_proxy_write_success, xdr_from_proxy_write_error, xdr_from_proxy_write_timeout.

xdr_tombstones

[enterprise][instantaneous][integer]
Location:

Namespace

Monitoring:

watch

Introduced:

5.0

Number of tombstones on this node which are created by XDR for non-durable client deletes. This includes both master and prole.

xdr_write_error

[enterprise][cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Removed:

4.5.1

Number of write requests initiated by XDR that failed on the namespace on this node. For the total number of XDR initiated write requests against this namespace on this node (destination node), sum up the xdr_write_success, xdr_write_timeout and xdr_write_error statistics. Replaced with xdr_client_write_error as of version 4.5.1.

xdr_write_success

[enterprise][cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Removed:

4.5.1

Number of write requests initiated by XDR that succeeded on the namespace on this node. For the total number of XDR initiated write requests against this namespace on this node (destination node), sum up the xdr_write_success, xdr_write_timeout and xdr_write_error statistics. Replaced with xdr_client_write_success as of version 4.5.1.

xdr_write_timeout

[enterprise][cumulative][integer]
Location:

Namespace

Monitoring:

optional

Introduced:

3.9

Removed:

4.5.1

Number of write requests initiated by XDR that timed out on the namespace on this node. For the total number of XDR initiated write requests against this namespace on this node (destination node), sum up the xdr_write_success, xdr_write_timeout and xdr_write_error statistics. Replaced with xdr_client_write_timeout as of version 4.5.1.

Sets

device_data_bytes

[instantaneous][integer]
Location:

Sets

Monitoring:

optional

Introduced:

5.2

Device storage used by this set in bytes, for the data part (does not include index part). Value will be 0 if data is not stored on device. For size used in memory, refer to memory_data_bytes.

memory_data_bytes

[instantaneous][integer]
Location:

Sets

Monitoring:

optional

Introduced:

3.9

Memory used by this set in bytes, for the data part (does not include index part). Value will be 0 if data is not stored in memory. For size used on disk, refer to device_data_bytes (available in version 5.2+), or the set level object size histogram.

n_bytes-memory

[instantaneous][integer]
Location:

Sets

Monitoring:

optional

Removed:

3.9

Memory used by this set, in bytes.

n_objects

[instantaneous][integer]
Location:

Sets

Monitoring:

optional

Removed:

3.9

Total number of objects (master and all replicas) in this set on this node.

ns

[instantaneous][integer]
Location:

Sets

Monitoring:

optional

Namespace name this set belongs to.

objects

[instantaneous][integer]
Location:

Sets

Monitoring:

watch

Introduced:

3.9

Total number of objects (master and all replicas) in this set on this node. This is updated in real time and is not dependent on the nsup-period or nsup-hist-period configurations.

set

[instantaneous][integer]
Location:

Sets

Monitoring:

optional

The name of this set.

set-delete

[instantaneous][integer]
Location:

Sets

Monitoring:

optional

Removed:

3.14.0

If enabled, nsup will remove all records in this set on its next run and continue to do so on subsequent runs until there is nothing further to delete, after which this value will return to false.

tombstones

[enterprise][instantaneous][integer]
Location:

Sets

Monitoring:

watch

Introduced:

3.10

Total number of tombstones (master and all replicas) in this set on this node.

truncate_lut

[instantaneous][integer]
Location:

Sets

Monitoring:

optional

Introduced:

3.12

The most covering truncate_lut for this set. See truncate or truncate-namespace.

Sindex

delete_error

[cumulative][integer]
Location:

Sindex

Monitoring:

optional

Introduced:

3.9

Removed:

6.0

Number of errors while processing a delete transaction for this secondary index.

delete_success

[cumulative][integer]
Location:

Sindex

Monitoring:

optional

Introduced:

3.9

Removed:

6.0

Number of successful delete transactions processed for this secondary index.

entries

[instantaneous][integer]
Location:

Sindex

Monitoring:

optional

Number of secondary index entries for this secondary index. This is the number of records that have been indexed by this secondary index.

ibtr_memory_used

[instantaneous][integer]
Location:

Sindex

Monitoring:

optional

Removed:

6.0

Amount of memory, in bytes, the secondary index is consuming for the keys, as opposed to nbtr_memory_used, which is the amount of memory the secondary index is consuming for the entries. The total is reported by si_accounted_memory.

keys

[instantaneous][integer]
Location:

Sindex

Monitoring:

optional

Removed:

6.0

Number of secondary keys for this secondary index.

load_pct

[instantaneous][integer]
Location:

Sindex

Monitoring:

optional

Progress, as a percentage, of the creation of the secondary index.

load_time

[instantaneous][integer]
Location:

Sindex

Monitoring:

optional

Introduced:

6.0

Time it took for the secondary index to be fully created.

loadtime

[instantaneous][integer]
Location:

Sindex

Monitoring:

optional

Removed:

6.0

Time it took for the secondary index to be fully created. Replaced by load_time as of version 6.0.

memory_used

[instantaneous][integer]
Location:

Sindex

Monitoring:

optional

Introduced:

6.0

Amount of memory, in bytes, consumed by the secondary index.

nbtr_memory_used

[instantaneous][integer]
Location:

Sindex

Monitoring:

optional

Removed:

6.0

Amount of memory, in bytes, the secondary index is consuming for the entries, as opposed to ibtr_memory_used, which is the amount of memory the secondary index is consuming for the keys. The total is reported by si_accounted_memory.

query_agg

[cumulative][integer]
Location:

Sindex

Monitoring:

optional

Removed:

5.7

Number of query aggregations attempted for this secondary index on this node.

query_agg_avg_rec_count

[instantaneous][integer]
Location:

Sindex

Monitoring:

optional

Removed:

5.7

Average number of records returned by the queries underlying aggregations against this secondary index.

query_agg_avg_record_size

[instantaneous][integer]
Location:

Sindex

Monitoring:

optional

Removed:

5.7

Average size of the records returned by the queries underlying aggregations against this secondary index.

query_avg_rec_count

[instantaneous][integer]
Location:

Sindex

Monitoring:

optional

Removed:

5.7

Average number of records returned by all the queries against this secondary index (combines query_agg_avg_rec_count and query_lookup_avg_rec_count).

query_avg_record_size

[instantaneous][integer]
Location:

Sindex

Monitoring:

optional

Removed:

5.7

Average size of the records returned by all the queries against this secondary index (combines query_agg_avg_record_size and query_lookup_avg_record_size).

query_basic_abort

[cumulative][integer]
Location:

Sindex

Monitoring:

optional

Introduced:

5.7

Removed:

6.0

Number of basic queries aborted for this secondary index. In server 6.0, use si_query_long_basic_abort.

query_basic_avg_rec_count

[instantaneous][integer]
Location:

Sindex

Monitoring:

optional

Introduced:

5.7

Removed:

6.0

Average number of records returned by the basic (lookup) queries against this secondary index.

query_basic_complete

[cumulative][integer]
Location:

Sindex

Monitoring:

optional

Introduced:

5.7

Removed:

6.0

Number of basic queries completed for this secondary index. In server 6.0, use si_query_long_basic_complete.

query_basic_error

[cumulative][integer]
Location:

Sindex

Monitoring:

optional

Introduced:

5.7

Removed:

6.0

Number of basic queries that returned error for this secondary index. In server 6.0, use si_query_long_basic_error.

query_lookup_avg_rec_count

[instantaneous][integer]
Location:

Sindex

Monitoring:

optional

Removed:

5.7

Average number of records returned by the lookup queries against this secondary index. Renamed to query_basic_avg_rec_count in server version 5.7.

query_lookup_avg_record_size

[instantaneous][integer]
Location:

Sindex

Monitoring:

optional

Removed:

5.7

Average size of the records returned by the lookup queries against this secondary index.

query_lookups

[cumulative][integer]
Location:

Sindex

Monitoring:

optional

Removed:

5.7

Number of lookup queries ever attempted for this secondary index on this node. Removed in server version 5.7. Use query_basic_complete + query_basic_error + query_basic_abort instead.

query_reqs

[cumulative][integer]
Location:

Sindex

Monitoring:

optional

Removed:

5.7

Number of query requests ever attempted for this secondary index on this node (combines query_lookups and query_agg).

si_accounted_memory

[instantaneous][integer]
Location:

Sindex

Monitoring:

optional

Removed:

5.7

Amount of memory, in bytes, the secondary index is consuming; the sum of ibtr_memory_used and nbtr_memory_used. Removed in server version 5.7.

si_query_long_basic_abort

[cumulative][integer]
Location:

Sindex

Monitoring:

optional

Introduced:

6.0

Number of basic long secondary index queries aborted for this secondary index.

si_query_long_basic_complete

[cumulative][integer]
Location:

Sindex

Monitoring:

optional

Introduced:

6.0

Number of basic long secondary index queries completed for this secondary index.

si_query_long_basic_error

[cumulative][integer]
Location:

Sindex

Monitoring:

optional

Introduced:

6.0

Number of basic long secondary index queries that returned error for this secondary index.

si_query_short_basic_complete

[cumulative][integer]
Location:

Sindex

Monitoring:

optional

Introduced:

6.0

Number of basic short secondary index queries completed for this secondary index.

si_query_short_basic_error

[cumulative][integer]
Location:

Sindex

Monitoring:

optional

Introduced:

6.0

Number of basic short secondary index queries that returned error for this secondary index.

si_query_short_basic_timeout

[cumulative][integer]
Location:

Sindex

Monitoring:

optional

Introduced:

6.0

Number of basic short secondary index queries that timed out for this secondary index. Short queries are not monitored, so they cannot be aborted; they can, however, time out, which is reflected in this statistic.

stat_delete_errs

[cumulative][integer]
Location:

Sindex

Monitoring:

optional

Removed:

3.9

Number of errors while processing a delete transaction for this secondary index. Replaced by delete_error as of version 3.9.

stat_delete_reqs

[cumulative][integer]
Location:

Sindex

Monitoring:

optional

Removed:

3.9

Number of attempts to process delete transactions for this secondary index. Refer to delete_success and delete_error stats as of version 3.9.

stat_delete_success

[cumulative][integer]
Location:

Sindex

Monitoring:

optional

Removed:

3.9

Number of successful delete transactions processed for this secondary index. Replaced by delete_success as of version 3.9.

stat_gc_recs

[cumulative][integer]
Location:

Sindex

Monitoring:

optional

Number of records that have been garbage collected out of the secondary index memory. Refer to si-gc-period and si-gc-max-units configuration parameters for tuning the secondary index garbage collection.

stat_gc_time

[cumulative][integer]
Location:

Sindex

Monitoring:

optional

Removed:

5.7

Amount of time spent processing garbage collection for the secondary index. Refer to si-gc-period and si-gc-max-units configuration parameters for tuning the secondary index garbage collection.

stat_write_errs

[cumulative][integer]
Location:

Sindex

Monitoring:

optional

Removed:

3.9

Number of errors while processing a write transaction for this secondary index. Replaced by write_error as of version 3.9.

stat_write_reqs

[cumulative][integer]
Location:

Sindex

Monitoring:

optional

Removed:

3.9

Number of attempts to process write transactions for this secondary index. Refer to write_success and write_error stats as of version 3.9.

stat_write_success

[cumulative][integer]
Location:

Sindex

Monitoring:

optional

Removed:

3.9

Number of successful write transactions processed for this secondary index. Replaced by write_success as of version 3.9.

write_error

[cumulative][integer]
Location:

Sindex

Monitoring:

optional

Introduced:

3.9

Removed:

6.0

Number of errors while processing a write transaction for this secondary index.

write_success

[cumulative][integer]
Location:

Sindex

Monitoring:

optional

Introduced:

3.9

Removed:

6.0

Number of successful write transactions processed for this secondary index.

Statistics

aggr_scans_failed

[cumulative]
Location:

Statistics

Monitoring:

Introduced:

3.6.0

Removed:

3.9

Number of aggregation scans aborted or failed. Moved to namespace level and renamed to scan_aggr_error as of 3.9.

aggr_scans_succeeded

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.6.0

Removed:

3.9

Number of aggregation scans that completed successfully. Moved to namespace level and renamed to scan_aggr_complete as of 3.9.

basic_scans_failed

[cumulative]
Location:

Statistics

Monitoring:

Introduced:

3.6.0

Removed:

3.9

Number of basic scans that failed. Moved to namespace level and renamed to scan_basic_error as of 3.9.

basic_scans_succeeded

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.6.0

Removed:

3.9

Number of basic scans that completed successfully. Moved to namespace level and renamed to scan_basic_complete as of 3.9.

batch_error

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.9

Removed:

4.4

Number of batch direct requests that were rejected because of errors.

batch_errors

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of batch direct requests that were rejected because of errors. Replaced with batch_error in 3.9.

batch_index_complete

[cumulative][integer]
Location:

Statistics

Monitoring:

watch

Introduced:

3.6.0

Number of batch index requests completed.

batch_index_created_buffers

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.6.4

Number of 128KB response buffers created. Response buffers are created when there are no buffers left in the pool. If this number consistently increases and there is available memory, then batch-max-unused-buffers should be increased.

batch_index_delay

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

4.1

Number of times a batch index response buffer has been delayed (WOULDBLOCK on the send). Note that the number of times a batch index transaction is completely abandoned because it went over its overall allocated time after being delayed is counted under the batch_index_error statistic and will have a WARNING log message associated.

batch_index_destroyed_buffers

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.6.4

Number of 128KB response buffers destroyed. Response buffers are destroyed when there is no slot left to put the buffer back into the pool. The maximum response buffer pool size is batch-max-unused-buffers.

batch_index_error

[cumulative][integer]
Location:

Statistics

Monitoring:

alert

Introduced:

3.9

Number of batch index requests that completed with an error. For example, if the client has timed out but the server is still attempting to send response buffers back. Another occurrence is if the server abandons the transaction due to encountering delays (WOULDBLOCK on send) of more than twice the total timeout set by the client (or 30 seconds if not set) when sending response buffers back (this would be accompanied by a WARNING log message). Note that each encountered delay is counted under the batch_index_delay statistic.

Additional information

Example:

Compare batch_index_error to batch_index_complete.
IF the ratio is higher than acceptable,
THEN alert operations to investigate.
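
A sketch of that ratio check; the 1% threshold is an arbitrary placeholder to be tuned for your workload:

# Warn if batch index errors exceed 1% of completed-plus-errored batch index requests.
asinfo -v 'statistics' -l \
  | awk -F= '/^batch_index_error=/ {err = $2} /^batch_index_complete=/ {ok = $2}
             END {if (ok + err > 0 && err / (ok + err) > 0.01)
                      printf "WARNING: batch index error ratio %.4f\n", err / (ok + err)}'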

batch_index_errors

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.6.0

Removed:

3.9

Number of batch index requests that were rejected because of errors. Replaced with batch_index_error in 3.9.

batch_index_huge_buffers

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.6.4

Number of temporary response buffers created that exceeded 128KB. Huge buffers are created when one of the records retrieved is greater than 128KB. Huge records do not benefit from batching and can result in excessive memory thrashing on the server. The batch_index_created_buffers and batch_index_destroyed_buffers counts do include the huge buffers created and destroyed.

batch_index_initiate

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.6.0

Number of batch index requests received.

batch_index_proto_compression_ratio

[enterprise][moving average][decimal]
Location:

Statistics

Monitoring:

optional

Introduced:

4.8

Measures the average compressed size to uncompressed size ratio for protocol message data in batch index responses. Thus 1.000 indicates no compression and 0.100 indicates a 1:10 compression ratio (90% reduction in size).

Additional information

The compression ratio is a moving average. It is calculated based on the most recent client responses. If the response message data changes over time then the compression ratio will change with it. In case of a sudden change in response data, the indicated compression ratio may lag behind a bit. As a rule of thumb, assume that the compression ratio covers the most recent 100,000 to 1,000,000 client responses.
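
For example, a reported ratio of 0.250 corresponds to a 75% reduction in size, since the reduction is simply (1 - ratio) x 100. A sketch that prints this directly:

# Convert the reported compression ratio into a percentage size reduction.
asinfo -v 'statistics' -l \
  | awk -F= '/^batch_index_proto_compression_ratio=/ {printf "batch index responses reduced in size by %.1f%%\n", (1 - $2) * 100}'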

batch_index_proto_uncompressed_pct

[enterprise][instantaneous][decimal]
Location:

Statistics

Monitoring:

optional

Introduced:

4.8

Measures the percentage of batch index responses with uncompressed protocol message data. Thus 0.000 indicates all responses with compressed data, and 100.000 indicates no responses with compressed data. For example, if protocol message data compression is not used, this metric will remain set to 0.000. If protocol message data compression is then turned on and all responses are compressed, this metric will remain set to 0.000. The only way this metric will ever be set to a value different than 0.000 is if compression is used, but some responses are not compressed (which happens when the uncompressed size is so small that the server does not try to compress, or when the compression fails).

Additional information

The percentage is a moving average. It is calculated based on the most recent client responses. If the response message data changes over time then the percentage will change with it. In case of a sudden change in response data, the indicated percentage may lag behind a bit. As a rule of thumb, assume that the percentage covers the most recent 100,000 to 1,000,000 client responses.

batch_index_queue

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.6.0

Number of batch index requests (transaction count) being processed and response buffer blocks in use on each batch queue. Format: <q1 requests>:<q1 buffers>,<q2 requests>:<q2 buffers>,... The buffer block counter is decremented on batch responses before the transaction count is decremented; it is therefore possible for a buffer slot to become available on the queue and a new batch transaction count to be incremented before the previous batch transaction count is decremented. It is also possible for multiple transactions to arrive for a thread for which none of the response buffers have been created yet. Finally, batch_index_huge_buffers are counted as part of the buffer blocks used on each batch queue.
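
A sketch that parses this format and prints one line per batch queue:

# Split batch_index_queue into per-queue "requests:buffers" pairs.
asinfo -v 'statistics' -l \
  | grep '^batch_index_queue=' | cut -d= -f2 | tr ',' '\n' \
  | awk -F: '{printf "queue %d: %s requests in flight, %s buffer blocks in use\n", NR - 1, $1, $2}'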

batch_index_timeout

[cumulative][integer]
Location:

Statistics

Monitoring:

watch

Introduced:

3.6.0

Number of batch index requests that timed out on the server before being processed. These would be caused by a batch subtransaction that has timed out for this batch index transaction. The overall time allowed for a batch-index transaction on the server is not bounded, except if a delay is encountered (WOULDBLOCK on send). As of version 4.1, the overall batch index transaction max delay time is twice the total timeout set by the client, or 30 seconds if no timeout is set by the client.

batch_index_unused_buffers

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.6.0

Number of available 128KB response buffers currently in the buffer pool.

batch_initiate

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

4.4

Number of batch direct requests received.

batch_queue

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

4.4

Number of batch direct requests remaining on the queue awaiting processing.

batch_timeout

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

4.4

Number of batch direct requests that timed out on the server before being processed.

batch_tree_count

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of tree lookups for all batch direct requests.

client_connections

[instantaneous][integer]
Location:

Statistics

Monitoring:

alert

Number of active client connections to this node. Also available in the log on the fds proto ticker line.

Additional information

Example:

IF client_connections is below an expected low value,
THEN this condition might indicate a problem with the network between clients and server.

IF client_connections is above an expected high value,
THEN this condition might indicate a problem with clients rapidly opening and closing sockets.

IF client_connections is at or near proto-fd-max,
THEN the Aerospike Server is either currently unable to accept new connections or might soon be unable to do so.
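
A sketch of the proto-fd-max proximity check; the configured limit is supplied as a placeholder value rather than read from the server:

# Warn when client connections come within 10% of the configured proto-fd-max.
PROTO_FD_MAX=15000   # placeholder - set to the proto-fd-max value from your service configuration
CONNS=$(asinfo -v 'statistics' -l | grep '^client_connections=' | cut -d= -f2)
if [ "$CONNS" -ge $((PROTO_FD_MAX * 9 / 10)) ]; then
    echo "ALERT: client_connections ($CONNS) is within 10% of proto-fd-max ($PROTO_FD_MAX)"
fi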

client_connections_closed

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

5.6

Number of client connections that have been closed. One of client_connections_opened or client_connections_closed should be closely monitored or alerted against. Also available in the log on the fds proto ticker line.

client_connections_opened

[cumulative][integer]
Location:

Statistics

Monitoring:

alert

Introduced:

5.6

Number of client connections created to this node since the node was started. One of client_connections_opened or client_connections_closed should be closely monitored or alerted against. Also available in the log on the fds proto ticker line.

Additional information

Example:

IF client_connections_opened changes unexpectedly without clients having been added or removed, or a significant change in workload having occurred,
THEN this condition might indicate a slow down on a node or a connectivity issue on the node.

cluster_clock_skew

[enterprise][instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

4.0

Removed:

4.0.0.4

Current maximum clock skew in milliseconds between nodes in the cluster. Would trigger stop writes when breaching the cluster_clock_skew_stop_writes_sec threshold. Replaced by cluster_clock_skew_ms as of version 4.0.0.4.

cluster_clock_skew_ms

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

4.0.0.4

Current maximum clock skew in milliseconds between nodes in a cluster. Will trigger clock_skew_stop_writes when breaching the cluster_clock_skew_stop_writes_sec threshold. This threshold is normally 20 seconds for strong-consistency namespaces on any Aerospike version, or 40 seconds for AP namespaces where nsup is enabled (i.e. nsup-period is not zero) and the Aerospike version is 4.5.1 or later.
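
A sketch that compares the current skew against the stop-writes threshold, converting the threshold from seconds to milliseconds; the 80% warning level is a placeholder:

# Warn when clock skew approaches the stop-writes threshold.
asinfo -v 'statistics' -l \
  | awk -F= '/^cluster_clock_skew_ms=/ {skew = $2}
             /^cluster_clock_skew_stop_writes_sec=/ {limit_ms = $2 * 1000}
             END {if (limit_ms > 0 && skew > limit_ms * 0.8)
                      print "WARNING: clock skew", skew, "ms exceeds 80% of the", limit_ms, "ms stop-writes threshold"}'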

cluster_clock_skew_stop_writes_sec

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

4.0

The threshold at which any namespace that is set to strong-consistency will stop accepting writes due to clock skew (cluster_clock_skew_ms).
Note that this value is in seconds, not milliseconds.
Note also that although this value will show as 0 for AP namespaces, from Aerospike version 4.5.1 onward, these namespaces will stop accepting writes if nsup is enabled (i.e. nsup-period is not zero) and the clock skew exceeds 40 seconds.

cluster_generation

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

4.3

A 64-bit unsigned integer incremented on a node for every successful cluster partition re-balance or transition to the orphan state. This is a node-local value and does not need to be the same across the cluster.

cluster_integrity

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

When false, indicates integrity issues within the cluster, meaning that some nodes are either faulty or dead. A node in the succession list is deemed faulty if it is alive but reports itself as an orphan or as part of some other cluster. Another condition for a faulty node is being alive but having a clustering protocol identifier that does not match the rest of the cluster. When true, indicates that the cluster is in a whole and complete state (as far as the nodes it sees and is able to connect to are concerned). Information about a cluster integrity fault is also logged repeatedly to the server log file.

cluster_is_member

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.13.0

When false, indicates that the node is not joined to a cluster; that is, it is an orphan. When true, indicates that the node is joined to a cluster.

cluster_key

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Randomly generated 64-bit hexadecimal string used to name the last Paxos cluster state agreement.

cluster_max_compatibility_id

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

5.0.0

Each node has a compatibility ID that is an integer based on the node's Aerospike Server version. During upgrades, this value is used to determine software compatibility. cluster_max_compatibility_id indicates the cluster's maximum software version.

See also cluster_min_compatibility_id.

cluster_min_compatibility_id

[instantaneous]
Location:

Statistics

Monitoring:

Introduced:

5.0.0

Each node has a compatibility ID that is an integer based on the node's Aerospike Server version. During upgrades, this value is used to determine software compatibility. cluster_min_compatibility_id indicates the cluster's minimum software version.

See also cluster_max_compatibility_id.

cluster_principal

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

4.3

This specifies the Node ID of the current cluster principal. Will be '0' on an orphan node.

cluster_size

[instantaneous][integer]
Location:

Statistics

Monitoring:

alert

Size of the cluster. Can be checked to make sure the size of the cluster is the expected one after adding or removing a node. This should be checked across all nodes in a cluster.

Additional information

Example:

IF cluster_size does not equal the expected cluster size and the cluster is not undergoing maintenance,
THEN your operations group needs to investigate.
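
A sketch of that check; EXPECTED is a placeholder for the known node count:

# Compare the reported cluster size against the expected number of nodes.
EXPECTED=5   # placeholder - the number of nodes this cluster should have
SIZE=$(asinfo -v 'statistics' -l | grep '^cluster_size=' | cut -d= -f2)
if [ "$SIZE" -ne "$EXPECTED" ]; then
    echo "ALERT: cluster_size is $SIZE, expected $EXPECTED (ignore during planned maintenance)"
fi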

data-used-bytes-memory

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Amount of memory occupied by record data. Removed in 3.9. Use namespace level statistics instead.

demarshal_error

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.9

Number of errors during the demarshal step.

early_tsvc_batch_sub_error

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.9

Number of errors early in the transaction for batch subtransactions. For example, bad/unknown namespace name or security authentication errors.

early_tsvc_client_error

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.9

Number of errors early in the transaction for direct client requests. Those include transactions hitting the proto-fd-max, transactions with a bad/unknown namespace name or security authentication errors. Those also include cases where partitions are unavailable in AP mode (when clients attempt transactions against an orphan node).

early_tsvc_from_proxy_batch_sub_error

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

4.5.1

Number of errors early in the transaction for batch subtransactions proxied from another node. For example, bad/unknown namespace name or security authentication errors.

early_tsvc_from_proxy_error

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

4.5.1

Number of errors early in the transaction for transactions (other than batch subtransactions) proxied from another node. For example, bad/unknown namespace name or security authentication errors.

early_tsvc_ops_sub_error

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

4.7

Number of errors early in an internal ops subtransaction (ops scan/query). For example, bad/unknown namespace name or security authentication errors.

early_tsvc_udf_sub_error

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.9

Number of errors early in the transaction for udf subtransactions. For example, bad/unknown namespace name or security authentication errors.

err_duplicate_proxy_request

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of duplicate proxy errors.

err_out_of_space

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of writes resulting in disk out of space errors. Use namespace level stop_writes instead.

Additional information

Example:

IF err_out_of_space is increasing,
THEN one or more storage devices have reached capacity and Aerospike cannot write.

err_replica_non_null_node

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of errors during cluster state exchange because of unexpected replica node information.

err_replica_null_node

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of errors during cluster state exchange because of missing replica node information.

err_rw_cant_put_unique

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of write transactions aborted because write required unique and record existed already.

err_rw_pending_limit

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of read/write transactions failed on 'hot keys'. Replaced with fail_key_busy at namespace level in 3.9.

Additional information

Example:

IF the application is not expected to have hot keys and err_rw_pending_limit rate of change exceeds expectations,
THEN this condition might indicate an application issue.

err_rw_request_not_found

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of read/write transactions that started but could not find the record in the rw hash after the replica side of the transaction was processed (due to a timeout that removed the record from the rw hash).

err_storage_defrag_fd_get

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

Yes

Removed

err_storage_queue_full

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of non-read requests that failed because the disk was too backed up; a 'DEVICE_OVERLOAD' failure is returned to the client. As of version 3.9, included in the client_write_error stat at the namespace level. Will also cause warning level errors in the log file. See also storage_max_write_cache.

err_sync_copy_null_master

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of errors during cluster state exchange because of missing master node information.

err_sync_copy_null_node

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

Yes

Number of errors during cluster state exchange because of missing general node information.

err_tsvc_requests

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of failures where the execution is not even attempted. Some examples: partition imbalance, partition not found, transaction prepare error, write during set-delete or unknown namespace in protocol request. As of version 3.9, such errors are reported as client_tsvc_error or under the client_write_error at the namespace level.

err_tsvc_requests_timeout

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of failures where the execution times out while in the transaction queue. As of version 3.9, moved under the client_tsvc_timeout stat at the namespace level.

err_write_fail_bin_exists

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of write requests resulting in error 'bin exists'. As of version 3.9, included in client_write_error stat at the namespace level. Will also cause warning level errors in the log file.

err_write_fail_bin_name

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of write requests resulting in error 'bin name'. As of version 3.9, included in client_write_error stat at the namespace level. Will also cause warning level errors in the log file.

err_write_fail_bin_notfound

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of write requests resulting in error 'bin not found'. As of version 3.9, included in client_write_error stat at the namespace level. Will also cause warning level errors in the log file.

err_write_fail_forbidden

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of write requests failed because a write transaction is being attempted on a set still being deleted. As of version 3.9, moved to fail_xdr_forbidden stat at the namespace level that tracks only transactions failing due to configuration restrictions (see config options allow-xdr-writes and allow-nonxdr-writes).

err_write_fail_generation

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of write requests failed because of generation mismatch. As of version 3.9, moved to fail_generation stat at the namespace level.

err_write_fail_generation_xdr

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.8.1

Number of write requests from XDR that failed because of generation mismatch.

err_write_fail_incompatible_type

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

For the data-in-index configuration, the number of write requests whose values are not integers. As of version 3.9, included in client_write_error stat at the namespace level. Will also cause warning level errors in the log file.

err_write_fail_key_exists

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of write transactions that failed because the key already exists. As of version 3.9, included in client_write_error stat at the namespace level. Will also cause warning level errors in the log file.

err_write_fail_key_mismatch

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of requests that failed due to key mismatch; this occurs when the key is stored in Aerospike and a key check is requested on the transaction. As of version 3.9, included in client_write_error stat at the namespace level. Will also cause warning level errors in the log file.

err_write_fail_not_found

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of write transactions that failed due to the key not found. As of version 3.9, included in client_write_error stat at the namespace level. Will also cause warning level errors in the log file.

err_write_fail_noxdr

[enterprise][cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.8.1

Number of writes rejected because XDR was not running. (Only in effect when the configuration parameter xdr_stop_writes_noxdr is on.)

err_write_fail_parameter

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of write transactions that failed because of a bad parameter from application code. As of version 3.9, included in client_write_error stat at the namespace level. Will also cause warning level errors in the log file.

err_write_fail_prole_delete

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of replica delete failures because the replica record is not found.

err_write_fail_prole_generation

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of prole write failures because of generation mismatch.

err_write_fail_prole_unknown

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of prole write failures with unknown errors.

err_write_fail_record_too_big

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of write failures due to record being too big (bigger than write-block-size). As of version 3.9, moved to fail_record_too_big stat at the namespace level.

err_write_fail_unknown

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of write failures with unknown errors.

fabric_bulk_recv_rate

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.11.1.1

Rate of traffic (bytes/sec) received by the fabric bulk channel during the last ticker-interval (every 10 seconds by default).

fabric_bulk_send_rate

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.11.1.1

Rate of traffic (bytes/sec) sent by the fabric bulk channel during the last ticker-interval (every 10 seconds by default).

fabric_connections

[instantaneous][integer]
Location:

Statistics

Monitoring:

watch

Introduced:

3.9

Number of active fabric connections to this node. Also available in the log on the fds proto ticker line.

fabric_connections_closed

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

5.6

Number of fabric connections that have been closed. Also available in the log on the fds proto ticker line.

fabric_connections_opened

[cumulative][integer]
Location:

Statistics

Monitoring:

alert

Introduced:

5.6

Number of fabric connections created to this node since the node was started. Also available in the log on the fds proto ticker line.

Additional information

Example:

IF fabric_connections_opened is unexpectedly changing,
THEN alert as this condition would indicate a connectivity problem with a node or a cluster change.
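
A minimal sketch of the rule above: poll the counter via asinfo (as in the examples at the top of this page) and flag unexpected growth. The host, the 60-second poll interval, and the 50-per-minute threshold are illustrative assumptions, not recommendations; the same pattern applies to heartbeat_connections_opened further down.

import subprocess
import time

# Read a single value from the service-level statistics.
# 'asinfo -v statistics -l' prints one "name=value" pair per line.
def read_stat(name, host="127.0.0.1"):
    out = subprocess.run(
        ["asinfo", "-h", host, "-v", "statistics", "-l"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        key, _, value = line.partition("=")
        if key.strip() == name:
            return int(value)
    raise KeyError(name)

MAX_OPENED_PER_MINUTE = 50  # illustrative baseline; tune to your workload

previous = read_stat("fabric_connections_opened")
while True:
    time.sleep(60)
    current = read_stat("fabric_connections_opened")
    if current - previous > MAX_OPENED_PER_MINUTE:
        print(f"ALERT: {current - previous} fabric connections opened in the last minute")
    previous = current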

fabric_ctrl_recv_rate

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.11.1.1

Rate of traffic (bytes/sec) received by the fabric ctrl channel during the last ticker-interval (every 10 seconds by default).

fabric_ctrl_send_rate

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.11.1.1

Rate of traffic (bytes/sec) sent by the fabric ctrl channel during the last ticker-interval (every 10 seconds by default).

fabric_meta_recv_rate

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.11.1.1

Rate of traffic (bytes/sec) received by the fabric meta channel during the last ticker-interval (every 10 seconds by default).

fabric_meta_send_rate

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.11.1.1

Rate of traffic (bytes/sec) sent by the fabric meta channel during the last ticker-interval (every 10 seconds by default).

fabric_msgs_rcvd

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.11.1.1

Number of messages received via the fabric layer from other nodes.

fabric_msgs_sent

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.11.1.1

Number of messages sent via the fabric layer to other nodes.

fabric_rw_recv_rate

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.11.1.1

Rate of traffic (bytes/sec) received by the fabric rw channel during the last ticker-interval (every 10 seconds by default).

fabric_rw_send_rate

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.11.1.1

Rate of traffic (bytes/sec) sent by the fabric rw channel during the last ticker-interval (every 10 seconds by default).

failed_best_practices

[instantaneous][boolean]
Location:

Statistics

Monitoring:

optional

Introduced:

5.7

True if any of the best practices checked at server startup were violated; otherwise false. Each failed best practice logs a unique warning message, and the list of failed best practices can be queried using the best-practices info command.
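
A sketch of how this flag and the best-practices info command mentioned above could be wired into a check script. The host and the asinfo-based helper are assumptions for illustration; the raw response of the best-practices command is printed as-is since its format is not documented here.

import subprocess

# Run an info command through asinfo, as in the examples at the top of this page.
def info(command, host="127.0.0.1"):
    return subprocess.run(
        ["asinfo", "-h", host, "-v", command, "-l"],
        capture_output=True, text=True, check=True,
    ).stdout

stats = {}
for line in info("statistics").splitlines():
    key, _, value = line.partition("=")
    stats[key.strip()] = value.strip()

if stats.get("failed_best_practices") == "true":
    print("Best-practice violations detected:")
    print(info("best-practices"))
else:
    print("No best-practice violations reported")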

free-pct-disk

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Percentage of disk available. As of version 3.9, moved to device_free_pct stat at the namespace level.

Additional information

Example:

IF free-pct-disk falls below 25%,
THEN the cluster is reaching capacity.

Consider adding nodes or increasing per-node capacity.

free-pct-memory

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Percentage of memory available. As of version 3.9, moved to memory_free_pct stat at the namespace level.

geo_region_query_cells

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

As of version 3.9, moved to the namespace level.

geo_region_query_falspos

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

As of version 3.9, moved to the namespace level.

geo_region_query_points

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

As of version 3.9, moved to the namespace level.

geo_region_query_reqs

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

As of version 3.9, moved to the namespace level.

heap_active_kbytes

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.10.1

The amount of memory in in-use pages, in KiB. An in-use page is a page that has some allocated memory (either partial or full).

heap_allocated_kbytes

[instantaneous][integer]
Location:

Statistics

Monitoring:

watch

Introduced:

3.10.1

The amount of memory, in KiB, allocated by the asd daemon. The heap_allocated_kbytes / heap_active_kbytes ratio (6.0 or later), or the heap_allocated_kbytes / heap_mapped_kbytes ratio (prior to 6.0), also provided as heap_efficiency_pct, gives a picture of the fragmentation of the heap. This covers all memory usage except the shared memory parts (for the primary index in the Enterprise Edition).

heap_efficiency_pct

[instantaneous][integer]
Location:

Statistics

Monitoring:

alert

Introduced:

3.10.1

Provides an indication of the jemalloc heap fragmentation. This represents the heap_allocated_kbytes / heap_active_kbytes ratio. A lower number indicates a higher fragmentation rate.

Additional information

Example:

IF heap_efficiency_pct goes below 60% or 50% (depending on configuration),
THEN advise your operations group to investigate.
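
A small sketch of the arithmetic behind this metric, recomputed from the raw statistics described under heap_allocated_kbytes. The helper, host, and 60% alert threshold are illustrative assumptions; on servers before 6.0 the denominator would be heap_mapped_kbytes instead.

import subprocess

# Fetch the service-level statistics as a dict of name -> value strings.
def read_stats(host="127.0.0.1"):
    out = subprocess.run(
        ["asinfo", "-h", host, "-v", "statistics", "-l"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = {}
    for line in out.splitlines():
        key, _, value = line.partition("=")
        stats[key.strip()] = value.strip()
    return stats

stats = read_stats()
allocated = int(stats["heap_allocated_kbytes"])
active = int(stats["heap_active_kbytes"])  # use heap_mapped_kbytes before 6.0

efficiency_pct = 100 * allocated / active  # lower means more fragmentation
print(f"computed heap efficiency: {efficiency_pct:.1f}%")
if efficiency_pct < 60:  # example threshold, per the rule above
    print("ALERT: heap fragmentation is high; ask operations to investigate")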

heap_mapped_kbytes

[instantaneous][integer]
Location:

Statistics

Monitoring:

watch

Introduced:

3.10.1

The amount of memory in mapped pages, in KiB, i.e., the amount of memory that JEM received from the Linux kernel. It should be a multiple of 4, since the typical page size is 4 KiB (4096 bytes).

heap_site_count

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.14.1

Number of distinct sites in the server code (specific locations in server functions) that have allocated heap memory designated for tracking (as governed by the debug-allocations setting) since the server was started. This value is only nonzero when debug-allocations is set to a value other than none, and it can only increase.

heartbeat_connections

[instantaneous][integer]
Location:

Statistics

Monitoring:

watch

Introduced:

3.9

Number of active heartbeat connections to this node. Also available in the log on the fds proto ticker line.

heartbeat_connections_closed

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

5.6

Number of heartbeat connections that have been closed. Also available in the log on the fds proto ticker line.

heartbeat_connections_opened

[cumulative][integer]
Location:

Statistics

Monitoring:

alert

Introduced:

5.6

Number of heartbeat connections created to this node since the node was started. Also available in the log on the fds proto ticker line.

Additional information

Example:

IF heartbeat_connections_opened is unexpectedly changing,
THEN alert as this condition would indicate a connectivity problem with a node or a cluster change.

heartbeat_received_foreign

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Total number of heartbeats received from remote nodes.

heartbeat_received_self

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Total number of multicast heartbeats from this node received by this node. Will be 0 for mesh.

index-used-bytes-memory

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Amount of memory occupied by the index measured in bytes. Use memory_used_index_bytes at the namespace level as of version 3.9.

info_complete

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.9

Number of info requests completed.

info_queue

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Number of info requests pending in info queue.

migrate_allowed

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Indicates whether migrations are allowed on this node: true when allowed, false when not. When the cluster changes, this value switches to false until the rebalance completes across all namespaces. The rebalance is the step that determines which partition migrations need to be scheduled; it is not the migrations themselves, but the process that precedes them. A value of true indicates that all migration-related statistics have been set and can be leveraged programmatically, for example migrate_partitions_remaining, to check whether migrations are ongoing.
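
A sketch of that programmatic check, combining migrate_allowed with migrate_partitions_remaining. The host and the asinfo-based helper are assumptions for illustration.

import subprocess

# Fetch the service-level statistics as a dict of name -> value strings.
def read_stats(host="127.0.0.1"):
    out = subprocess.run(
        ["asinfo", "-h", host, "-v", "statistics", "-l"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = {}
    for line in out.splitlines():
        key, _, value = line.partition("=")
        stats[key.strip()] = value.strip()
    return stats

stats = read_stats()
migrations_done = (
    stats.get("migrate_allowed") == "true"
    and int(stats.get("migrate_partitions_remaining", "0")) == 0
)
print("migrations complete on this node" if migrations_done
      else "rebalance pending or migrations ongoing")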

migrate_msgs_recv

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.8.3

Number of migrate messages received.

migrate_msgs_sent

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.8.3

Number of migrate messages sent.

migrate_num_incoming_accepted

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.8.3

Number of migrate requests accepted from other nodes.

migrate_num_incoming_refused

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.8.3

Number of migrate requests refused from other nodes due to reaching migrate-max-num-incoming limit.

migrate_partitions_remaining

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.8.3

This is the number of partitions remaining to migrate (in either direction). When migrate_allowed is true, this is the stat that accurately determines whether migrations are complete for a single node across all namespaces. There could be a short period after a reclustering event when this statistic shows 0 even though migrations have not started yet; during that time, migrate_allowed returns false.

migrate_progress_recv

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.8.3

Number of partitions currently being received on this node. Replaced by migrate_rx_partitions_active at the namespace level in version 3.9.

migrate_progress_send

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.8.3

Number of partitions currently being sent out from this node. Replaced by migrate_tx_partitions_active at the namespace level in version 3.9.

migrate_rx_objs

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.8.3

Number of partitions currently migrating to this node. Replaced by migrate-rx-instance-count in 3.8.3.

migrate_tx_objs

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.8.3

Number of partitions pending migration out of this node. Replaced by migrate-tx-instance-count in 3.8.3.

objects

[instantaneous][integer]
Location:

Statistics

Monitoring:

watch

Total number of replicated objects on this node. Includes master and replica objects.

Additional information

Example:

Trending objects provides operations insight into object fluctuations over time.

ongoing_write_reqs

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of records currently in write transactions.

partition_absent

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of partitions for which this node is neither master nor replica.

partition_actual

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of partitions for which this node is acting as master.

partition_desync

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of partitions that are not yet synced with the rest of the cluster.

partition_object_count

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Total number of objects. Removed as of version 3.9. Use partition-info for full partition info dump.

partition_ref_count

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of partitions that are currently being read.

partition_replica

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of partitions for which this node is acting as replica.

paxos_principal

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Identifier of the node that this node believes to be the Paxos principal.

process_cpu_pct

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

4.7.0

Percentage of CPU usage by the asd process.

proxy_action

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of proxy requests received from other nodes.

proxy_in_progress

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.3.21

Number of proxies in progress. Also called the proxy hash. The transaction's ttl (client-set timeout or transaction-max-ms) is checked every 5 ms (server 6.0 and later) while waiting in the proxy hash.

proxy_initiate

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of proxy requests initiated. As of 3.9, this is tracked under the namespace level statistics proxy_complete and proxy_error.

proxy_retry

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of retried proxy requests to other nodes.

proxy_retry_new_dest

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of proxy retries this node delivered to a new destination.

proxy_retry_q_full

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of proxy retries failed because fabric queue was full.

proxy_retry_same_dest

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of proxy retries this node delivered to the same destination.

proxy_unproxy

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of re-executions (from scratch) because of unavailability of proxy node.

queries_active

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

6.0

Number of queries (formerly scans) currently active. The queries_active stat is shared by both primary index (PI) queries and secondary index (SI) queries. Only long queries are monitored.

query_abort

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of user aborted queries seen by this node. Refer to query_agg_abort and query_lookup_abort at the namespace level as of version 3.9.

query_agg

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of aggregations run on this node. Moved to namespace level as of version 3.9.

query_agg_abort

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of aggregations aborted by the user seen by this node. Moved to namespace level as of version 3.9.

query_agg_avg_rec_count

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Average number of records returned by aggregations seen by this node. Moved to namespace level as of version 3.9.

query_agg_err

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of aggregations that failed due to an internal error on this node. Moved to the namespace level query_agg_error as of version 3.9.

query_agg_success

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of aggregations that succeeded on this node without error. Moved to namespace level as of version 3.9.

query_avg_rec_count

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Average number of records returned by all queries executed on this node. Refer to query_agg_avg_rec_count and query_lookup_avg_rec_count at the namespace level as of version 3.9.

query_bad_records

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

Yes

Number of false positive entries in secondary index queries.

query_fail

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of queries that failed due to an internal error on this node. Moved to namespace level as of version 3.9.

query_long_queue_full

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of queue-full errors for long running queries. Moved to namespace level as of version 3.9.

query_long_reqs

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of long running queries currently in process. Moved to namespace level as of version 3.9.

query_long_running

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

6.0

Number of long running queries ever attempted in the system (queries that selected more records than query_threshold).

query_lookup_abort

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of user aborted look-ups seen by this node. Moved to namespace level as of version 3.9.

query_lookup_avg_rec_count

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Average number of records returned by all look-ups seen by this node. Moved to namespace level as of version 3.9.

query_lookup_err

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of look-ups that failed due to an error on this node. Moved to the namespace level stat query_lookup_error as of version 3.9.

query_lookup_success

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of look-ups which succeeded on this node. Moved to namespace level as of version 3.9.

query_lookups

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of look-ups performed by this node. Moved to namespace level as of version 3.9.

query_reqs

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of query requests received by this node. Even very early failures would be counted here, as opposed to query_short_running and query_long_running which would tick a bit later. Moved to namespace level as of version 3.9.

query_short_queue_full

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of queue-full errors for short running queries. Moved to namespace level as of version 3.9.

query_short_reqs

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of short running queries currently in process. Moved to namespace level as of version 3.9.

query_short_running

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

6.0

Number of short running queries ever attempted in the system (queries that selected fewer records than query_threshold).

query_success

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of queries that succeeded on this node. As of version 3.9, broken down into the namespace level stats query_lookup_success, query_udf_bg_success and query_agg_success.

query_tracked

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Number of queries tracked by the system (queries that ran for longer than the configured untracked time, 1 second by default).

queue

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of pending requests waiting to execute. Replaced with tsvc_queue as of version 3.9.

read_dup_prole

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of requests sent for duplicate resolution.

reaped_fds

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Number of idle client connections closed.

Additional information

Example:

IF reaped_fds is growing more rapidly than normal,
THEN clients may be opening and closing sockets too rapidly -- a potential application issue.

record_locks

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of record index locks currently active in the node.

record_refs

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.10

Number of index records currently referenced.

repl_factor

[instantaneous][integer]
Location:

Namespace

Monitoring:

optional

Removed:

3.15.1.3

The replication factor for the namespace.
Replaced with effective_replication_factor.

rw_err_ack_badnode

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of acknowledgments from unexpected nodes.

rw_err_ack_internal

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of prole write acknowledgments failed due to internal errors.

rw_err_ack_nomatch

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of prole write acknowledgments that went amiss or carried mismatched information.

rw_err_dup_cluster_key

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of errors encountered during duplicate resolution because of cluster key mismatch.

rw_err_dup_internal

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of errors encountered during duplicate resolutions.

rw_err_dup_send

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of errors encountered during duplicate resolutions because of failure to send fabric messages.

rw_err_dup_write_cluster_key

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

Yes

Removed

rw_err_dup_write_internal

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

Yes

Removed

rw_err_write_cluster_key

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of replica write failures due to cluster key mismatch.

rw_err_write_internal

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of write requests failed because of internal errors (code errors).

rw_err_write_send

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of prole write acknowledgments that failed because a fabric message could not be sent.

rw_in_progress

[instantaneous][integer]
Location:

Statistics

Monitoring:

alert

Introduced:

3.9

Number of rw transactions in progress. Also called the rw hash. This tracks transactions parked on the rw hash while processing on other nodes (all write replicas, read duplicate resolutions). The transaction's ttl (client-set timeout or transaction-max-ms) is checked every 5 ms in server 6.0 and later while waiting in the rw hash.

Additional information

Example:

Depends on expected workload.

IF rw_in_progress is higher than expected, or IF this deviates more than acceptable from the established baseline over time,
THEN alert operations to investigate the cause. May indicate a slowdown on a particular node or overloading on the fabric.
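
One way to express the baseline comparison above, as a sketch: keep a rolling window of samples and alert when the latest sample deviates too far from the window average. The window size, poll interval, and 3x deviation factor are example values, not recommendations.

import collections
import subprocess
import time

# Read a single value from the service-level statistics via asinfo.
def read_stat(name, host="127.0.0.1"):
    out = subprocess.run(
        ["asinfo", "-h", host, "-v", "statistics", "-l"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        key, _, value = line.partition("=")
        if key.strip() == name:
            return int(value)
    raise KeyError(name)

window = collections.deque(maxlen=60)  # last 60 samples (about 10 minutes at 10 s intervals)
while True:
    sample = read_stat("rw_in_progress")
    if len(window) == window.maxlen:
        baseline = sum(window) / len(window)
        if sample > 3 * max(baseline, 1):  # deviation factor is illustrative
            print(f"ALERT: rw_in_progress={sample}, baseline={baseline:.1f}")
    window.append(sample)
    time.sleep(10)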

scans_active

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.6.0

Removed:

6.0

Number of scans currently active. In server 6.0 and later, use queries_active.

sindex-used-bytes-memory

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Amount of memory being occupied by secondary indexes across all namespaces. Replaced with memory_used_sindex_bytes at the namespace level as of version 3.9.

sindex_gc_activity_dur

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.3.10

Removed:

3.14

Sum of sindex GC thread activity (millisecond).

sindex_gc_garbage_cleaned

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.3.10

Removed:

5.7

Sum of secondary index garbage entries cleaned by sindex GC. Moved to namespace level as sindex_gc_cleaned in version 5.7.

sindex_gc_garbage_found

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.3.10

Removed:

5.7

Sum of secondary index garbage entries found by sindex GC.

sindex_gc_inactivity_dur

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.3.10

Removed:

3.14

Sum of sindex GC thread inactivity (millisecond).

sindex_gc_list_creation_time

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.3.10

Removed:

5.7

Sum of time spent in finding secondary index garbage entries by sindex GC (millisecond).

sindex_gc_list_deletion_time

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.3.10

Removed:

5.7

Sum of time spent in cleaning sindex garbage entries by sindex GC (millisecond).

sindex_gc_locktimedout

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.3.3

Removed:

4.2

Number of times the sindex GC iteration timed out waiting for a partition lock.

sindex_gc_objects_validated

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.3.10

Removed:

5.7

Number of secondary index entries processed by sindex GC.

sindex_gc_retries

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

4.2

Removed:

5.7

Number of retries when sindex GC cannot get the sprig lock. Replaced sindex_gc_locktimedout.

sindex_ucgarbage_found

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.3.3

Removed:

5.7

Number of un-cleanable garbage entries in the sindexes encountered through queries.

stat_cluster_key_err_ack_dup_trans_reenqueue

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of duplicate transactions re-enqueued because of a cluster key mismatch.

stat_cluster_key_err_ack_rw_trans_reenqueue

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

Yes

Number of read/write transactions re-enqueued because of a cluster key mismatch.

stat_cluster_key_partition_transaction_queue_count

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

Yes

Removed/unused

stat_cluster_key_prole_retry

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

Yes

Number of times a prole write was retried as a result of a cluster key mismatch.

stat_cluster_key_regular_processed

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

Yes

Number of successful transactions that passed the cluster key test.

stat_cluster_key_trans_to_proxy_retry

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

Yes

Number of times a proxy was redirected.

stat_cluster_key_transaction_reenqueue

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

Yes

Removed/unused

stat_compressed_pkts_received

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of compressed packets received.

stat_delete_success

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of successful record deletes. Moved to client_delete_success stat at the namespace level as of version 3.9.

stat_deleted_set_objects

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of deleted set objects as a result of a 'set-delete' command. Refer to set_deleted_objects at the namespace level.

stat_duplicate_operation

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of read/write transactions which require duplicate resolution.

stat_evicted_objects

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of objects evicted.

Additional information

Example:

Trending stat_evicted_objects provides operations insight into system eviction behavior over time. Moved to evicted_objects stat at the namespace level as of version 3.9.

stat_evicted_objects_time

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Average expiry time (TTL) of the objects evicted in the last iteration.

stat_evicted_set_objects

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

Yes

Number of objects evicted from a Set due to set limits defined in Aerospike configuration.

stat_expired_objects

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of objects expired.

Additional information

Example:

Trending stat_expired_objects provides operations insight into system expiration behavior over time. Moved to expired_objects stat at the namespace level as of version 3.9.

stat_ldt_proxy

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of proxies for LDT records.

stat_nsup_deletes_not_shipped

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of deletes resulting from eviction/expiration etc. that are not shipped.

stat_proxy_errs

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of proxy requests returning errors. Moved to client_proxy_error/batch_sub_proxy_error stats at the namespace level as of version 3.9.

stat_proxy_reqs

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of proxy requests attempted. Refer to the different proxy statistics at the namespace level as of version 3.9.

stat_proxy_reqs_xdr

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of XDR operations that resulted in proxies.

stat_proxy_success

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of proxy requests served successfully. Refer to the client_proxy_complete/batch_sub_proxy_complete stats at the namespace level as of version 3.9.

stat_read_errs_notfound

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of read requests resulting in error 'key not found'. Moved to client_read_not_found/batch_sub_read_not_found stats at the namespace level as of version 3.9.

stat_read_errs_other

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of read requests resulting in other errors. Moved to client_read_error/batch_sub_read_error stats at the namespace level as of version 3.9.

stat_read_reqs

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of total read requests attempted. Refer to the different *_read_* statistics at the namespace level as of version 3.9 for different read statistics from client, batch, and their different breakdown (success, error, not found, timeout).

stat_read_reqs_xdr

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of XDR read requests attempted. Refer to the different xdr_read_* stats as of version 3.9.

stat_read_success

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of read requests successful. As of version 3.8.3, this stat will not include reads initiated by XDR. As of version 3.9, refer to the client_read_success/batch_sub_read_success stats at the namespace level.

stat_rw_timeout

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of read and write requests failed because of timeout on the server. As of version 3.9, refer to the more specific stats at the namespace level.

stat_single_bin_records

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

Yes

Removed: Number of single bin records.

stat_slow_trans_queue_batch_pop

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

Yes

Number of times a batch of transactions was moved from the slow queue to the fast queue.

stat_slow_trans_queue_pop

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

Yes

Number of transactions moved from the slow queue to the fast queue.

stat_slow_trans_queue_push

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

Yes

Number of transactions pushed onto the slow queue.

stat_write_errs

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of write requests resulting in errors. As of version 3.9, refer to the client_write_error stat at the namespace level.

stat_write_errs_notfound

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of errors returning key not found on a write request. As of version 3.9, this is included in the client_write_error stat at the namespace level.

stat_write_errs_other

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of non 'not found' errors on write requests. As of version 3.9, this is included in the client_write_error stat at the namespace level.

stat_write_reqs

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of total write requests attempted. As of version 3.9, refer to the client_write_* stats at the namespace level.

stat_write_reqs_xdr

[enterprise][cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of write requests from XDR. As of version 3.9, refer to the xdr_write_* stats at the namespace level.

stat_write_success

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of write requests successful. As of version 3.9, refer to the client_write_success stat at the namespace level.

stat_xdr_pipe_miss

[enterprise][cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.8.1

Number of log records that couldn't be written to the named pipe by the server. Generally happens when the XDR end of the pipe is closed.

stat_xdr_pipe_writes

[enterprise][cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.8.1

Number of log records that were written to the named pipe by the server.

stat_zero_bin_records

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of write requests that failed because of zero bin records.

storage_defrag_corrupt_record

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of times the defrag thread encountered invalid records.

storage_defrag_wait

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

Yes

Number of times the defrag thread waited (called sleep).

sub-records

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of sub objects. Replaced with sub_objects stat in 3.9.

sub_objects

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.9

Number of LDT sub objects. Aggregated over the sub_objects stat at the namespace level.

system_free_mem_kbytes

[instantaneous][integer]
Location:

Statistics

Monitoring:

alert

Amount of free system memory in kilobytes. Note this includes buffers and caches, but not shared memory.

Additional information

Example:

IF system_free_mem_kbytes is abnormally low,
THEN the server is reaching the limits of the available RAM. Operations should investigate and potentially add nodes or increase per-node RAM.

system_free_mem_pct

[instantaneous][integer]
Location:

Statistics

Monitoring:

alert

Percentage of free system memory. For versions prior to 3.16.0.4, the amount of shared memory used is wrongly reported as free. This was addressed as part of AER-5810.

Additional information

Example:

IF system_free_mem_pct is abnormally low,
THEN the server is reaching the limits of the available RAM. Operations should investigate and potentially add nodes or increase per-node RAM.

system_kernel_cpu_pct

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

4.7.0

Percentage of CPU usage by processes running in kernel mode.

system_swapping

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

4.4.0.4

Boolean state; true indicates that the system is currently swapping RAM to disk.

Additional information

Example:

IF system_swapping is ever true,
THEN operations should investigate. Swapping can cause drastic performance fluctuations.

system_thp_mem_kbytes

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

5.7.0

Amount of memory in use by the Transparent Huge Page mechanism, in kilobytes.

system_total_cpu_pct

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

4.7.0

Percentage of CPU usage by all running processes. Equal to system_user_cpu_pct + system_kernel_cpu_pct.

Additional information
note

This metric reports the percentage CPU usage out of (total number of CPUs * 100)% for an Aerospike node. For example, if a node has 10 CPUs, system_total_cpu_pct may be reported anywhere between 0 and 1000%, depending on CPU usage.
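
The arithmetic in the note, as a short sketch. The sample value is made up, and the script assumes it runs on the Aerospike node itself so that os.cpu_count() matches the server's CPU count.

import os

system_total_cpu_pct = 640       # example value read from the statistics context
cpu_count = os.cpu_count() or 1  # assumption: the script runs on the Aerospike node

normalized_pct = system_total_cpu_pct / cpu_count
print(f"{normalized_pct:.0f}% average utilization across {cpu_count} CPUs")
# e.g. 640% on a 10-CPU node corresponds to 64% average utilization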

system_user_cpu_pct

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

4.7.0

Percentage of CPU usage by processes running in user mode.

threads_detached

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

5.6

Number of detached server threads currently running.

threads_joinable

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

5.6

Number of joinable server threads currently running.

threads_pool_active

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

5.6

Number of currently active threads in the server thread pool.

threads_pool_total

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

5.6

Total number of threads in the server thread pool.

time_since_rebalance

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

4.3.1

Number of seconds since the last reclustering event, whether triggered via the recluster info command or by a cluster disruption (such as a node being added/removed or a network disruption).

total-bytes-disk

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Total size of disk (bytes). Refer to the namespace level device_total_bytes stat as of 3.9.

total-bytes-memory

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Total size of memory (bytes). Refer to the memory-size configuration parameter.

transactions

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Total number of transactions executed by this node -- includes all reads, writes, and info commands.

tree_count

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of index trees currently active in the node.

tree_gc_queue

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.10

This is the number of trees queued up, ready to be completely removed (following a partition drop). Corresponds to the tree-gc-q entry in the log ticker.

tscan_aborted

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

Yes

Number of scans that were aborted. Removed as of 3.6.0.

tscan_initiate

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

Yes

Number of new scan requests initiated. Removed as of 3.6.0.

tscan_pending

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

Yes

Number of scan requests pending. Removed as of 3.6.0.

tscan_succeeded

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

Yes

Number of scan requests that have successfully finished. Removed as of 3.6.0.

tsvc_queue

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Introduced:

3.9

Removed:

4.7

Number of pending requests waiting to execute in the transaction queue. An increase in this metric would indicate that the server is not keeping up with the workload, typically due to starvation of transaction threads (transaction-threads-per-queue). A common cause would be high latencies on disk i/o. When picked up from the transaction queue, transactions are checked against the timeout (set by the client, or, if not set, configured under transaction-max-ms) and, if over, will return a timeout and tick the client_tsvc_timeout metric. Removed in version 4.7, along with the removal of dedicated transaction threads and queues.

udf_bg_scans_failed

[cumulative]
Location:

Statistics

Monitoring:

Introduced:

3.6.0

Removed:

3.9

Number of scan background udf jobs that failed. Moved to scan_udf_bg_error at the namespace level as of version 3.9.

udf_bg_scans_succeeded

[cumulative]
Location:

Statistics

Monitoring:

Introduced:

3.6.0

Removed:

3.9

Number of scan background udf jobs that completed. Moved to scan_udf_bg_complete at the namespace level as of version 3.9.

udf_delete_err_others

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of errors encountered during UDF delete.

Additional information
note

As of version 3.9, UDF related statistics have been moved to the namespace level and broken down as follows:
- the client_udf_* stats for the udf call itself.
- the client_lang_* stats for the underlying operation statuses.
As of version 4.5.1, stats for proxied udf calls are also included:
- the from_proxy_udf_* stats for proxied udf calls themselves.
- the from_proxy_lang_* stats for the underlying operation statuses for proxied udf calls.
Similarly, for scan background udf and query background udf jobs:
- the scan_udf_bg_abort/complete/error and query_udf_bg_success/failure for the overall job.
- the udf_sub_udf_* stats for the udf calls triggered by those jobs.
- the udf_sub_lang_* stats for the underlying operation statuses of the udf calls triggered by those jobs.

udf_delete_reqs

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of UDF delete requests attempted. Refer to the new stats breakdown at the namespace level as of version 3.9: client_lang_delete_success, client_lang_error, and client_lang_timeout for the underlying operation itself and client_udf_* for the udf call itself.

Additional information
note

As of version 3.9, UDF related statistics have been moved to the namespace level and broken down as follows:
- the client_udf_* stats for the udf call itself.
- the client_lang_* stats for the underlying operation statuses.
As of version 4.5.1, stats for proxied udf calls are also included:
- the from_proxy_udf_* stats for proxied udf calls themselves.
- the from_proxy_lang_* stats for the underlying operation statuses for proxied udf calls.
Similarly, for scan background udf and query background udf jobs:
- the scan_udf_bg_abort/complete/error and query_udf_bg_success and query_udf_bg_failure for the overall job.
- the udf_sub_udf_* stats for the udf calls triggered by those jobs.
- the udf_sub_lang_* stats for the underlying operation statuses of the udf calls triggered by those jobs.

udf_delete_success

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of successful UDF delete operations. Refer to client_lang_delete_success at the namespace level as of version 3.9.

Additional information
note

As of version 3.9, UDF related statistics have been moved to the namespace level and broken down as follows:
- the client_udf_* stats for the udf call itself.
- the client_lang_* stats for the underlying operation statuses.
As of version 4.5.1, stats for proxied udf calls are also included:
- the from_proxy_udf_* stats for proxied udf calls themselves.
- the from_proxy_lang_* stats for the underlying operation statuses for proxied udf calls.
Similarly, for scan background udf and query background udf jobs:
- the scan_udf_bg_abort/complete/error and query_udf_bg_success and query_udf_bg_failure for the overall job.
- the udf_sub_udf_* stats for the udf calls triggered by those jobs.
- the udf_sub_lang_* stats for the underlying operation statuses of the udf calls triggered by those jobs.

udf_lua_errs

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of overall Lua errors. Refer to statistic client_lang_error at the namespace level as of version 3.9.

Additional information
note

As of version 3.9, UDF related statistics have been moved to the namespace level and broken down as follows:
- the client_udf_* stats for the udf call itself.
- the client_lang_* stats for the underlying operation statuses.
As of version 4.5.1, stats for proxied udf calls are also included:
- the from_proxy_udf_* stats for proxied udf calls themselves.
- the from_proxy_lang_* stats for the underlying operation statuses for proxied udf calls.
Similarly, for scan background udf and query background udf jobs:
- the scan_udf_bg_abort/complete/error and query_udf_bg_success and query_udf_bg_failure for the overall job.
- the udf_sub_udf_* stats for the udf calls triggered by those jobs.
- the udf_sub_lang_* stats for the underlying operation statuses of the udf calls triggered by those jobs.

udf_query_rec_reqs

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of record UDF calls in a query background udf job. Refer to query_udf_bg_success and query_udf_bg_failure at the namespace level as of version 3.9.

Additional information
note

As of version 3.9, UDF related statistics for scan background udf and query background udf jobs have been moved to the namespace level and broken down as follows:
- the scan_udf_bg_abort/complete/error and query_udf_bg_success and query_udf_bg_failure for the overall job.
- the udf_sub_udf_* stats for the udf calls triggered by those jobs.
- the udf_sub_lang_* stats for the underlying operation statuses of the udf calls triggered by those jobs.

udf_read_errs_other

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of unsuccessful UDF read operations.

Additional information
note

As of version 3.9, UDF related statistics have been moved to the namespace level and broken down as follows:
- the client_udf_* stats for the udf call itself.
- the client_lang_* stats for the underlying operation statuses.
As of version 4.5.1, stats for proxied udf calls are also included:
- the from_proxy_udf_* stats for proxied udf calls themselves.
- the from_proxy_lang_* stats for the underlying operation statuses for proxied udf calls.
Similarly, for scan background udf and query background udf jobs:
- the scan_udf_bg_abort/complete/error and query_udf_bg_success and query_udf_bg_failure for the overall job.
- the udf_sub_udf_* stats for the udf calls triggered by those jobs.
- the udf_sub_lang_* stats for the underlying operation statuses of the udf calls triggered by those jobs.

udf_read_reqs

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of UDF read requests attempted. Refer to the new stats breakdown at the namespace level as of version 3.9: client_lang_read_success and client_lang_error/client_lang_timeout for the underlying operation itself and client_udf_* for the udf call itself.

Additional information
note

As of version 3.9, UDF related statistics have been moved to the namespace level and broken down as follows:
- the client_udf_* stats for the udf call itself.
- the client_lang_* stats for the underlying operation statuses.
As of version 4.5.1, stats for proxied udf calls are also included:
- the from_proxy_udf_* stats for proxied udf calls themselves.
- the from_proxy_lang_* stats for the underlying operation statuses for proxied udf calls.
Similarly, for scan background udf and query background udf jobs:
- the scan_udf_bg_abort/complete/error and query_udf_bg_success and query_udf_bg_failure for the overall job.
- the udf_sub_udf_* stats for the udf calls triggered by those jobs.
- the udf_sub_lang_* stats for the underlying operation statuses of the udf calls triggered by those jobs.

udf_read_success

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of successful UDF read operations. Refer to the client_lang_read_success stat at the namespace level as of version 3.9.

Additional information
note

As of version 3.9, UDF related statistics have been moved to the namespace level and broken down as follows:
- the client_udf_* stats for the udf call itself.
- the client_lang_* stats for the underlying operation statuses.
As of version 4.5.1, stats for proxied udf calls are also included:
- the from_proxy_udf_* stats for proxied udf calls themselves.
- the from_proxy_lang_* stats for the underlying operation statuses for proxied udf calls.
Similarly, for scan background udf and query background udf jobs:
- the scan_udf_bg_abort/complete/error and query_udf_bg_success and query_udf_bg_failure for the overall job.
- the udf_sub_udf_* stats for the udf calls triggered by those jobs.
- the udf_sub_lang_* stats for the underlying operation statuses of the udf calls triggered by those jobs.

udf_replica_writes

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of UDF replica writes. This statistic is not reported anymore as of version 3.9.

udf_scan_rec_reqs

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of record UDF calls in a scan background udf job. Refer to scan_udf_bg_complete, scan_udf_bg_abort and scan_udf_bg_error at the namespace level as of version 3.9.

Additional information
note

As of version 3.9, UDF related statistics for scan background udf and query background udf jobs have been moved to the namespace level and broken down as follows:
- the scan_udf_bg_abort/complete/error and query_udf_bg_success and query_udf_bg_failure for the overall job.
- the udf_sub_udf_* stats for the udf calls triggered by those jobs.
- the udf_sub_lang_* stats for the underlying operation statuses of the udf calls triggered by those jobs.

udf_write_err_others

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of unsuccessful UDF write operations.

Additional information
note

As of version 3.9, UDF related statistics have been moved to the namespace level and broken down as follows:
- the client_udf_* stats for the udf call itself.
- the client_lang_* stats for the underlying operation statuses.
As of version 4.5.1, stats for proxied udf calls are also included:
- the from_proxy_udf_* stats for proxied udf calls themselves.
- the from_proxy_lang_* stats for the underlying operation statuses for proxied udf calls.
Similarly, for scan background udf and query background udf jobs:
- the scan_udf_bg_abort/complete/error and query_udf_bg_success and query_udf_bg_failure for the overall job.
- the udf_sub_udf_* stats for the udf calls triggered by those jobs.
- the udf_sub_lang_* stats for the underlying operation statuses of the udf calls triggered by those jobs.

udf_write_reqs

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of UDF write requests attempted. Refer to the new stats breakdown at the namespace level as of version 3.9: client_lang_write_success and client_lang_error/client_lang_timeout for the underlying operation itself and client_udf_* for the udf call itself.

Additional information
note

As of version 3.9, UDF related statistics have been moved to the namespace level and broken down as follows:
- the client_udf_* stats for the udf call itself.
- the client_lang_* stats for the underlying operation statuses.
As of version 4.5.1, stats for proxied udf calls are also included:
- the from_proxy_udf_* stats for proxied udf calls themselves.
- the from_proxy_lang_* stats for the underlying operation statuses for proxied udf calls.
Similarly, for scan background udf and query background udf jobs:
- the scan_udf_bg_abort/complete/error and query_udf_bg_success and query_udf_bg_failure for the overall job.
- the udf_sub_udf_* stats for the udf calls triggered by those jobs.
- the udf_sub_lang_* stats for the underlying operation statuses of the udf calls triggered by those jobs.

udf_write_success

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of successful UDF write operations. Refer to the client_lang_write_success stat at the namespace level as of version 3.9.

Additional information
note

As of version 3.9, UDF related statistics have been moved to the namespace level and broken down as follows:
- the client_udf_* stats for the udf call itself.
- the client_lang_* stats for the underlying operation statuses.
As of version 4.5.1, stats for proxied udf calls are also included:
- the from_proxy_udf_* stats for proxied udf calls themselves.
- the from_proxy_lang_* stats for the underlying operation statuses for proxied udf calls.
Similarly, for scan background udf and query background udf jobs:
- the scan_udf_bg_abort/complete/error and query_udf_bg_success and query_udf_bg_failure for the overall job.
- the udf_sub_udf_* stats for the udf calls triggered by those jobs.
- the udf_sub_lang_* stats for the underlying operation statuses of the udf calls triggered by those jobs.

uptime

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Time in seconds since last server restart.

Additional information

Example:

IF uptime is below 300 and the cluster is not undergoing maintenance
THEN this node restarted within the last 5 minutes. Advise operations to investigate.

used-bytes-disk

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Size of disk in use measured in bytes. Refer to device_used_bytes at the namespace level as of version 3.9.

Additional information

Example:

Trending provides operations insight into how used-bytes-disk changes over time.

used-bytes-memory

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Size of memory in use measured in bytes. Refer to memory_used_bytes at the namespace level as of version 3.9.

waiting_transactions

[instantaneous][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of read/write transactions currently queued.

write_master

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of master writes performed by this node.

write_prole

[cumulative][integer]
Location:

Statistics

Monitoring:

optional

Removed:

3.9

Number of prole (replica) writes performed by this node.

XDR

cur_throughput

[enterprise][instantaneous][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.2.7

Removed:

3.9

Throughput, in records per second, that XDR is currently able to ship. Replaced by xdr_throughput as of version 3.9.

Additional information

Example:

IF cur_throughput drops below the expected XDR throughput THEN alert operations; this may indicate a problem with inter-cluster connectivity.
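
A minimal Python sketch of this check, assuming the XDR statistics have already been fetched into a name-to-value mapping with the asinfo XDR statistics command appropriate to your server version; the expected throughput floor is a placeholder.

EXPECTED_MIN_RPS = 5000   # placeholder: your normal XDR shipping rate

def check_xdr_throughput(stats: dict) -> None:
    # cur_throughput pre-3.9, xdr_throughput on 3.9 and later.
    current = int(stats.get("cur_throughput", stats.get("xdr_throughput", 0)))
    if current < EXPECTED_MIN_RPS:
        print(f"ALERT: XDR throughput {current} rec/s below expected "
              f"{EXPECTED_MIN_RPS} rec/s - check inter-cluster connectivity")

# Example with an illustrative, hand-built stats dict:
check_xdr_throughput({"cur_throughput": "1200"})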

dlog_free_pct

[enterprise][instantaneous][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.9

Removed:

5.0.0

Percentage of the digest log free and available for use.

dlog_logged

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.9

Removed:

5.0.0

Number of records logged into digest log.

Additional information

Example:

Trending dlog_logged allows operations insight into how many records are being enqueued for shipment over time.

dlog_overwritten_error

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.9

Removed:

5.0.0

Number of digest log entries that got overwritten.

dlog_processed_link_down

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.9

Removed:

5.0.0

Number of link-down records that were processed.

dlog_processed_main

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.9

Removed:

5.0.0

Number of records processed on the local Aerospike server.

dlog_processed_replica

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.9

Removed:

5.0.0

Number of records processed for a node in the cluster that is not the local node.

dlog_relogged

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.9

Removed:

5.0.0

Number of records relogged by this node into the digest log due to temporary issues when attempting to ship. A relogged digest log entry would be caused by one of three potential conditions:
- An issue with the local client when attempting to ship (tracked by xdr_ship_source_error).
- An issue with the network or the destination cluster itself (tracked by xdr_ship_destination_error).
- An issue when reading the record on the local node (tracked by xdr_read_error), but those would actually end up relogged on the node now owning the record (see xdr_relogged_outgoing).

Additional information

The XDR component typically processes only master records' digest log entries on a given node (the exception being failed-node processing, when a node on the source cluster has failed). When such a master record's dlog entry is relogged, the corresponding prole copy is also relogged on the node holding the replica. This increments the xdr_relogged_outgoing statistic on the current node and xdr_relogged_incoming on the receiving node. It is therefore expected that the dlog_relogged and xdr_relogged_outgoing statistics match on clusters that are stable (no migrations).

The relogs happening due to master partition ownership changes (migrations) are also tracked through xdr_relogged_incoming and xdr_relogged_outgoing.

Permanent errors are not relogged, but produce a WARNING log message at the destination cluster (for example: an invalid namespace, a record too big due to a mismatched write-block-size between source and destination, or an authentication or permission error).

Some Permanent Errors: AEROSPIKE_ERR_RECORD_TOO_BIG, AEROSPIKE_ERR_REQUEST_INVALID, AEROSPIKE_ERR_ALWAYS_FORBIDDEN.
Some Transient Errors: AEROSPIKE_ERR_SERVER, AEROSPIKE_ERR_CLUSTER_CHANGE, AEROSPIKE_ERR_SERVER_FULL, AEROSPIKE_ERR_CLUSTER, AEROSPIKE_ERR_RECORD_BUSY, AEROSPIKE_ERR_DEVICE_OVERLOAD, AEROSPIKE_ERR_FAIL_FORBIDDEN.

Refer to the C client errors for the exhaustive list.
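
A minimal Python sketch of the consistency check described above, assuming the XDR statistics have already been fetched into a name-to-value mapping with the command appropriate to your (pre-5.0) server version; the tolerance value is a placeholder.

def check_relog_consistency(stats: dict, tolerance: int = 100) -> None:
    # On a stable cluster (no migrations), dlog_relogged and
    # xdr_relogged_outgoing are expected to track each other.
    relogged = int(stats.get("dlog_relogged", 0))
    outgoing = int(stats.get("xdr_relogged_outgoing", 0))
    if abs(relogged - outgoing) > tolerance:
        print(f"NOTE: dlog_relogged={relogged} and xdr_relogged_outgoing={outgoing} "
              "diverge - expected only during migrations or failed-node processing")

# Example with an illustrative, hand-built stats dict:
check_relog_consistency({"dlog_relogged": "5230", "xdr_relogged_outgoing": "5230"})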

dlog_used_objects

[enterprise][instantaneous][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.9

Removed:

5.0.0

Total number of record slots used in the digest log.

err_ship_client

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.2.7

Removed:

3.9

Replaced with xdr_ship_source_error as of version 3.9. Number of client layer errors while shipping records. Errors include timeout, bad network fd, etc.

err_ship_server

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.2.7

Removed:

3.9

Replaced with xdr_ship_destination_error as of version 3.9. Number of errors from the remote cluster(s) while shipping records. Errors include out-of-space, key-busy, etc.

esmt-bytes-shipped

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.2.7

Removed:

3.9

Estimated number of bytes XDR has shipped to remote clusters. Renamed to xdr_ship_bytes as of version 3.9.

esmt-bytes-shipped-compression

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.2.7

Removed:

3.8.1

Estimated number of bytes shipped in compressed format.

esmt-ship-compression

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.2.7

Removed:

3.8.1

Estimated compression ratio used to determine how beneficial compression is (higher is better). Refer to xdr_ship_compression_avg_pct as of version 3.9.

free-dlog-pct

[enterprise][instantaneous][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.2.7

Removed:

3.9

Percentage of the digest log free and available for use. Replaced by dlog_free_pct as of version 3.9.

hotkeys_fetched

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.8.3

Removed:

3.9

Renamed to xdr_hotkey_fetch as of version 3.9. Replaces noship_recs_hotkey_timeout. If there are hot keys in the system (the same record updated very frequently), xdr optimizes by not shipping every update. This stat represents the number of record digests that are actually shipped because their cache entries expired and were dirty. Should be interpreted in conjunction with noship_recs_hotkey. The timeout of the cache entries is controlled by xdr-hotkey-time-ms.
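
The skip-versus-fetch behavior described above can be pictured with the following conceptual Python sketch. It is not the server's actual implementation: the cache structure, the counter variables, and the example digests are illustrative only, with HOTKEY_TIME_MS standing in for xdr-hotkey-time-ms.

import time

# Conceptual model only - not the server's actual data structure.
HOTKEY_TIME_MS = 100   # plays the role of xdr-hotkey-time-ms

cache = {}             # digest -> (last_ship_time, dirty)
hotkeys_fetched = 0
noship_recs_hotkey = 0

def on_dlog_entry(digest: str) -> None:
    """Decide whether to ship the update for this digest or skip it."""
    global hotkeys_fetched, noship_recs_hotkey
    now = time.monotonic()
    entry = cache.get(digest)
    if entry is not None:
        shipped_at, dirty = entry
        if (now - shipped_at) * 1000 < HOTKEY_TIME_MS:
            # A version of this record was just shipped: mark dirty and skip.
            cache[digest] = (shipped_at, True)
            noship_recs_hotkey += 1
            return
        if dirty:
            # Entry expired while holding unshipped updates: fetch and ship.
            hotkeys_fetched += 1
    # Ship the current version and refresh the cache entry.
    cache[digest] = (now, False)

for d in ["digestA", "digestA", "digestA"]:
    on_dlog_entry(d)
print(noship_recs_hotkey, hotkeys_fetched)   # e.g. 2 skips for the hot digest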

latency_avg_dlogread

[enterprise][instantaneous][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.2.7

Removed:

3.9

Moving average latency in milliseconds to read from the digest log.

latency_avg_dlogwrite

[enterprise][instantaneous][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.2.7

Removed:

3.9

Moving average latency in milliseconds to write to the digest log.

latency_avg_ship

[enterprise][instantaneous][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.2.7

Removed:

3.9

Replaced with xdr_ship_latency_avg as of version 3.9. Moving average latency in milliseconds to ship a record to remote Aerospike clusters.

local_recs_error

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.2.7

Removed:

3.9

Number of errors while fetching records from the local Aerospike server. Refer to xdr_read_error as of version 3.9.

local_recs_fetch_avg_latency

[enterprise][instantaneous][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.2.7

Removed:

3.9

Moving average latency in milliseconds to read a record/batch of records from local Aerospike server. Refer to xdr_read_latency_avg as of version 3.9.

local_recs_fetched

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.2.7

Removed:

3.9

Number of records fetched from local Aerospike server. Refer to xdr_read_success as of version 3.9.

local_recs_migration_retry

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.2.7

Number of records missing in a batch call, generally a result of migrations, but can also be caused by expiration and eviction.

local_recs_notfound

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.2.7

Removed:

3.9

Number of record fetch attempts to the local Aerospike server that resulted in 'Not Found' response code. Refer to xdr_read_notfound as of version 3.9.

noship_recs_dup_intrabatch

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.5.0

Removed:

3.8.1

Replaced by noship_recs_hotkey and noship_recs_hotkey_timeout as of 3.8.1. If there are hot keys in the system (same record updated quite frequently), xdr optimizes by not shipping all the updates. First level of optimization is to find the duplicate digests in a read batch. This counter indicates how many such skips were done. A second level of optimization is done across read batches and can be configured using the xdr-hotkey-maxskip configuration parameter.

noship_recs_expired

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.2.7

Removed:

3.9

Number of records not shipped because the record expired before XDR was able to ship it.

noship_recs_genmismatch

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.2.7

Removed:

3.8.1

Replaced by noship_recs_hotkey and noship_recs_hotkey_timeout as of 3.8.1. Number of records not shipped to remote Aerospike cluster because of a Generation Mismatch between the digestlog and the local Aerospike server (controlled by xdr-hotkey-maxskip).

noship_recs_hotkey

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.8.1

Removed:

3.9

Replaces noship_recs_dup_intrabatch and noship_recs_genmismatch. Renamed to xdr_hotkey_skip as of version 3.9. If there are hot keys in the system (the same record updated very frequently), xdr optimizes by not shipping every update. This stat represents the number of record digests that are skipped because an entry already exists in the reader thread's cache (meaning a version of this record was just shipped). Should be interpreted in conjunction with noship_recs_hotkey_timeout. The timeout of the cache entries is controlled by xdr-hotkey-time-ms.

noship_recs_hotkey_timeout

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.8.1

Removed:

3.8.3

Replaces noship_recs_dup_intrabatch and noship_recs_genmismatch. Replaced by hotkeys_fetched in 3.8.3. If there are hot keys in the system (the same record updated very frequently), xdr optimizes by not shipping every update. This stat represents the number of record digests that are actually shipped because their cache entries expired and were dirty. Should be interpreted in conjunction with noship_recs_hotkey. The timeout of the cache entries is controlled by xdr-hotkey-time-ms.

noship_recs_notmaster

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.2.7

Removed:

3.9

Number of records in the digest log that were not shipped because the local node is not the master node for these records.

noship_recs_uninitialized_destination

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Removed:

3.9

Replaced with xdr_uninitialized_destination_error as of version 3.9. Number of records in the digest log not shipped because the destination cluster has not been initialized.

noship_recs_unknown_namespace

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.2.7

Removed:

3.9

Replaced with xdr_unknown_namespace_error as of version 3.9. Number of records in the digest log not shipped because they belong to an unknown namespace (should never happen).

perdc_timediff_lastship_cur_secs

[enterprise][instantaneous][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.2.7

Removed:

3.8.1

Time lag of each remote DC compared to the local cluster represented as a comma separated list of values. Replaced by dc_timelag in DC stats as of version 3.8.1.

stat_dlog_fread

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.2.7

Removed:

3.9

Number of fread calls on digest log.

stat_dlog_fseek

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.2.7

Removed:

3.9

Number of fseek calls on digest log.

stat_dlog_fwrite

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.2.7

Removed:

3.9

Number of fwrite calls on digest log.

stat_dlog_read

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.2.7

Removed:

3.9

Number of logical reads from digest log.

stat_dlog_write

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.2.7

Removed:

3.9

Number of logical writes to the digest log.

stat_pipe_reads_diginfo

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.2.7

Number of digest information entries read from the named pipe.

stat_recs_dropped

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.2.7

Removed:

3.9

Number of records read from the named pipe that could not be written to the digest log because the writer's queue was full.

stat_recs_localprocessed

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.2.7

Removed:

3.9

Number of records processed on the local Aerospike server. Renamed to dlog_processed_main as of version 3.9.

stat_recs_logged

[enterprise][cumulative][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.2.7

Removed:

3.9

Number of records logged into digest log. Renamed to dlog_logged as of version 3.9.

Additional information

Example:

Trending stat_recs_logged allows operations insight into how many records are being enqueued for shipment over time.

stat_recs_outstanding

[enterprise][instantaneous][integer]
Location:

XDR

Monitoring:

optional

Introduced:

3.2.7

Removed:

3.9

Number of outstanding records not yet processed by the main thread. Renamed to xdr_ship_outstanding_objects as of version 3.9.