Aerospike Admin Live Cluster Mode Commands
Help
The help
command provides a brief description of all supported commands. You can
provide the name of a command to print help for that specific command.
In the example below, we request help for the info
command.
Admin> help info
A collection of commands that display summary tables for various aspects of
Aerospike namespaces.
Usage: info COMMAND
or
Usage: info [with node1 [node2 [...]]]
with - Show results from specified nodes. Acceptable values are
ip:port, node-id, or FQDN
Default: all
Commands:
Default Displays network, namespace, and xdr summary information
namespace A collection of commands that display summary tables for various aspects of Aerospike namespaces
network Displays network information for the cluster
set Displays summary information for each set
sindex Displays summary information for Secondary Indexes
xdr Displays summary information for each datacenter
Run 'help info COMMAND' for more information on a command.
Disable
Introduced: 2.1.0
The disable
command exits privileged mode. This is useful
for preventing an administrator from inadvertently executing commands that could
alter the state of the Aerospike cluster in undesirable ways.
Enable
Introduced: 2.1.0
The enable
command enters privileged mode, which allows you
to execute manage
and asinfo
commands. Privileged mode was created to keep
users aware that the commands being executed can have undesirable consequences if
used incorrectly. Additionally, you can use the --warn
option to receive a warning
when you run a command that might have unintended consequences. The
warning presents a six-character hexadecimal string that must be entered
before the command runs.
Example: overwriting a UDF module without the --warn
flag:
Admin> enable
Admin+>
Admin+> manage udf add test.lua path path/to/test.lua
Successfully added UDF test.lua
Admin+> disable
Example: overwriting a UDF module with the --warn
flag:
Admin>
Admin> enable --warn
Admin+> manage udf add test.lua path path/to/test.lua
You are about to write over an existing UDF module.
Confirm that you want to proceed by typing 48b015, or anything else to cancel.
48b015
Successfully added UDF test.lua
Admin+> disable
Admin>
Info
Commands within info
provide diagnostic information in a concise tabular
format. Without additional arguments, info
executes the network,
namespace, and xdr sub-commands. Commands descending from
info
alert you to potential cluster issues by coloring suspicious text
red. You will also notice that one node name is always green; this is the
node expected to be the Paxos principal. For the namespace and set
sub-commands, extra rows in blue display the sum of statistics per
namespace and set.
Info network
The info network
command primarily serves as a translation table between
node name, node ID, and IP. It also provides cluster stats such as the
cluster size, cluster key, number of client connections, and uptime for each server.
It also displays the cluster name for server 3.10 and above.
Under the Node ID column, asadm indicates the node expected to be the Paxos principal with an asterisk.
Example Output:
Admin> info network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Network Information (2020-12-16 21:45:32 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node| Node ID| IP| Build|Migrations|~~~~~~~~~~~~~~~~~~Cluster~~~~~~~~~~~~~~~~~~|Client| Uptime
| | | | |Size| Key|Integrity| Principal| Conns|
10.0.0.1:3000| BB9010016AE4202| 10.0.0.1:3000|C-5.3.0.1| 0.000 | 5|92DCF600367B|True |BB9050016AE4202| 2|00:07:48
10.0.0.2:3000| BB9020016AE4202| 10.0.0.2:3000|C-5.3.0.1| 0.000 | 5|92DCF600367B|True |BB9050016AE4202| 2|00:07:47
10.0.0.3:3000| BB9030016AE4202| 10.0.0.3:3000|C-5.3.0.1| 0.000 | 5|92DCF600367B|True |BB9050016AE4202| 2|00:07:46
10.0.0.4:3000| BB9040016AE4202| 10.0.0.4:3000|C-5.3.0.1| 0.000 | 5|92DCF600367B|True |BB9050016AE4202| 3|00:07:46
10.0.0.5:3000|*BB9050016AE4202| 10.0.0.5:3000|C-5.3.0.1| 0.000 | 5|92DCF600367B|True |BB9050016AE4202| 3|00:07:45
Number of rows: 5
Info namespace
The info namespace
command displays a summary of important namespace
statistics for each namespace defined on each node, ordered by namespace and
node. It displays an extra row that aggregates some of the statistics.
When the primary index or secondary index is stored on device (not shmem), extra
usage statistics are displayed, similar to the "Memory" columns in the following table.
From 0.1.14 onwards it displays information in two separate tables:
- Usage: Namespace usage related details
- Object: Namespace object related details
Example Output:
Admin> info namespace
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Namespace Usage Information (2023-03-21 23:44:05 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Namespace| Node|Evictions| Stop|~Device~|~~~~~~~~~~~~Memory~~~~~~~~~~~|~Primary Index~~|~Secondary Index~
| | |Writes| HWM%| Used| Used%| HWM%| Stop%| Type| Used| Type| Used
bar |172.17.0.3:3000| 0.000 |False | 0.0 %| 0.000 B | 0.0 %|0.0 %|90.0 %|shmem| 0.000 B |shmem | 0.000 B
bar |172.17.0.4:3000| 0.000 |False | 0.0 %| 0.000 B | 0.0 %|0.0 %|90.0 %|shmem| 0.000 B |shmem | 0.000 B
bar |172.17.0.5:3000| 0.000 |False | 0.0 %| 0.000 B | 0.0 %|0.0 %|90.0 %|shmem| 0.000 B |shmem | 0.000 B
bar | | 0.000 | | | 0.000 B | 0.0 %| | | | 0.000 B | | 0.000 B
test |172.17.0.3:3000| 0.000 |False | 0.0 %|16.169 MB|0.39 %|0.0 %|90.0 %|shmem|103.125 KB|shmem | 16.000 MB
test |172.17.0.4:3000| 0.000 |False | 0.0 %|16.164 MB|0.39 %|0.0 %|90.0 %|shmem| 99.625 KB|shmem | 16.000 MB
test |172.17.0.5:3000| 0.000 |False | 0.0 %|16.179 MB|0.39 %|0.0 %|90.0 %|shmem|108.812 KB|shmem | 16.000 MB
test | | 0.000 | | |48.511 MB|0.39 %| | | |311.562 KB| | 48.000 MB
Number of rows: 6
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Namespace Object Information (2023-03-21 23:44:05 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Namespace| Node|Rack| Repl|Expirations| Total|~~~~~~~~~~Objects~~~~~~~~~~|~~~~~~~~~Tombstones~~~~~~~~|~~~~Pending~~~~
| | ID|Factor| |Records| Master| Prole|Non-Replica| Master| Prole|Non-Replica|~~~~Migrates~~~
| | | | | | | | | | | | Tx| Rx
bar |172.17.0.3:3000| 0| 2| 0.000 |0.000 |0.000 |0.000 | 0.000 |0.000 |0.000 | 0.000 |0.000 |0.000
bar |172.17.0.4:3000| 0| 2| 0.000 |0.000 |0.000 |0.000 | 0.000 |0.000 |0.000 | 0.000 |0.000 |0.000
bar |172.17.0.5:3000| 0| 2| 0.000 |0.000 |0.000 |0.000 | 0.000 |0.000 |0.000 | 0.000 |0.000 |0.000
bar | | | | 0.000 |0.000 |0.000 |0.000 | 0.000 |0.000 |0.000 | 0.000 |0.000 |0.000
test |172.17.0.3:3000| 0| 1| 0.000 |1.650 K|1.650 K|0.000 | 0.000 |0.000 |0.000 | 0.000 |0.000 |0.000
test |172.17.0.4:3000| 0| 1| 0.000 |1.594 K|1.594 K|0.000 | 0.000 |0.000 |0.000 | 0.000 |0.000 |0.000
test |172.17.0.5:3000| 0| 1| 0.000 |1.741 K|1.741 K|0.000 | 0.000 |0.000 |0.000 | 0.000 |0.000 |0.000
test | | | | 0.000 |4.985 K|4.985 K|0.000 | 0.000 |0.000 |0.000 | 0.000 |0.000 |0.000
Number of rows: 6
Info set
(Introduced: 0.0.15)
The info set
command displays a summary of important set
statistics for each set defined on each namespace on all nodes, ordered by set and
namespace. If configured, it displays details about your storage quotas. It includes an extra row that aggregates the grouped rows.
Example Output:
Admin+> info set
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Set Information (2023-03-21 23:18:54 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Namespace| Set| Node| Memory| Disk|~~~~~~Quota~~~~~~~| Objects| Stop| Disable| Set
| | | Used| Used| Total| Used%| |Writes|Eviction|Index
| | | | | | | | Count| |
test |testset|172.17.0.3:3000| 37.534 KB|0.000 B | 48.828 KB|76.87 %|882.000 | 0|False |No
test |testset|172.17.0.4:3000| 37.326 KB|0.000 B | 48.828 KB|76.44 %|877.000 | 0|False |No
test |testset|172.17.0.5:3000| 38.353 KB|0.000 B | 48.828 KB|78.55 %|901.000 | 0|False |No
test |testset| |113.213 KB|0.000 B |146.484 KB|77.29 %| 2.660 K| | |
test |ufodata|172.17.0.3:3000| 32.640 KB|0.000 B | 0.000 B | --|768.000 | 0|False |No
test |ufodata|172.17.0.4:3000| 30.479 KB|0.000 B | 0.000 B | --|717.000 | 0|False |No
test |ufodata|172.17.0.5:3000| 35.700 KB|0.000 B | 0.000 B | --|840.000 | 0|False |No
test |ufodata| | 98.818 KB|0.000 B | 0.000 B | 0.0 %| 2.325 K| | |
Number of rows: 6
Further statistics for a set can be displayed using the show statistics
command for specific sets:
Admin> show statistics sets for test1 testset
Info sindex
The info sindex
command displays a summary of important secondary index
statistics for each sindex defined on each namespace on all nodes, ordered by sindex and
node.
Example Output:
Admin> info sindex
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Secondary Index Information (2020-12-16 23:10:06 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Index Name|Namespace| Set| Node| Bins| Bin|State|Keys|~~~~~~~~~~Entries~~~~~~~~~~|~~~~Storage~~~~~|~~~~Queries~~~~~|~~~~Updates~~~~~| Context
| | | | | Type| | | Total| Avg Per| Avg Per| Type| Used|Requests|Avg Num| Writes|Deletes|
| | | | | | | | | Rec| Bin Val| | | Recs| | | |
name-sindex|bar |testset|10.0.0.1:3000| name|STRING|RW | 2| 1.000 K | 1.000 | 0.500 K |shmem| 16.000 MB| 0.000 |0.000 | 5.000 |0.000 |--
name-sindex|bar |testset|10.0.0.3:3000| name|STRING|RW | 2| 1.000 K | 1.000 | 0.500 K |shmem| 16.000 MB| 0.000 |0.000 | 5.000 |0.000 |--
name-sindex|bar |testset|10.0.0.4:3000| name|STRING|RW | 2| 1.000 K | 1.000 | 0.500 K |shmem| 16.000 MB| 0.000 |0.000 | 3.000 |0.000 |--
name-sindex|bar |testset|10.0.0.5:3000| name|STRING|RW | 2| 1.000 K | 1.000 | 0.500 K |shmem| 16.000 MB| 0.000 |0.000 | 4.000 |0.000 |--
name-sindex|bar |testset|10.0.0.6:3000| name|STRING|RW | 2| 1.000 K | 1.000 | 0.500 K |shmem| 16.000 MB| 0.000 |0.000 | 3.000 |0.000 |--
|bar |testset| | | | | | 5.000 K | | 2.500 K |shmem| 80.000 MB| 0.000 |0.000 |20.000 |0.000 |--
age-sindex |test |testset|10.0.0.3:3000| age|STRING|RW | 0| 0.000 | 1.000 | 0.000 |shmem| 16.000 MB| 0.000 |0.000 | 0.000 |0.000 |[list_index(-1), map_key(<string#11>)]
age-sindex |test |testset|10.0.0.1:3000| age|STRING|RW | 0| 0.000 | 1.000 | 0.000 |shmem| 16.000 MB| 0.000 |0.000 | 0.000 |0.000 |[list_index(-1), map_key(<string#11>)]
age-sindex |test |testset|10.0.0.4:3000| age|STRING|RW | 0| 0.000 | 1.000 | 0.000 |shmem| 16.000 MB| 0.000 |0.000 | 0.000 |0.000 |[list_index(-1), map_key(<string#11>)]
age-sindex |test |testset|10.0.0.5:3000| age|STRING|RW | 0| 0.000 | 1.000 | 0.000 |shmem| 16.000 MB| 0.000 |0.000 | 0.000 |0.000 |[list_index(-1), map_key(<string#11>)]
age-sindex |test |testset|10.0.0.6:3000| age|STRING|RW | 0| 0.000 | 1.000 | 0.000 |shmem| 16.000 MB| 0.000 |0.000 | 0.000 |0.000 |[list_index(-1), map_key(<string#11>)]
|test |testset| | | | | | 0.000 | 1.000 | 0.000 |shmem| 80.000 MB| 0.000 |0.000 | 0.000 |0.000 |[list_index(-1), map_key(<string#11>)]
Number of rows: 10
Further statistics for a secondary index can be displayed using the show statistics
command for a specific sindex:
Admin> show statistics sindex for test test_str_idx
Info xdr
The info xdr
command shows the current performance characteristics of XDR on each node.
The info xdr
command supports filtering by datacenter using the for
modifier.
Example Output:
Admin> info xdr for DC1
~~~~~~~~~~~~~~~~~~~~XDR Information DC1 (2020-12-17 00:11:48 UTC)~~~~~~~~~~~~~~~~~~~~
Node|Success|~~~~~~~~Retry~~~~~~~~~|Recoveries| Lag| Avg|Throughput
| |Connection|Destination| Pending|(hh:mm:ss)|Latency| (rec/s)
| | Reset| | | | (ms)|
10.0.0.3:3000| 224| 0| 0| 0| 00:00:00| 0| 1078
10.0.0.5:3000| 206| 0| 0| 0| 00:00:00| 0| 970
| | | | 0| | 0|
Number of rows: 2
Info dc
(Introduced: 0.0.16)
The info dc
command displays a summary of important datacenter
statistics for each datacenter.
This feature is replaced by info xdr on server versions 5.0 and above.
Example Output:
Admin> info dc
~~~~~~~~~~~~~~~~~~~~~DC Information (2020-12-18 18:12:25 UTC)~~~~~~~~~~~~~~~~~~~~~
Node| DC| DC Type|Namespaces| Lag|Records| Avg| Status
| | | | |Shipped|Latency|
| | | | | | (ms)|
10.0.0.1:3000|aerospike_b|aerospike|test |00:00:00| 44452| 50|CLUSTER_UP
10.0.0.2:3000|aerospike_b|aerospike|test |00:00:00| 45307| 52|CLUSTER_UP
10.0.0.1:3000|aerospike_c|aerospike|test |00:00:00| 44452| 54|CLUSTER_UP
10.0.0.2:3000|aerospike_c|aerospike|test |00:00:00| 45307| 56|CLUSTER_UP
Number of rows: 4
Show
The show
commands generally provide verbose output about the
requested component. Most commands support the like modifier. All commands
support the with modifier, with the exceptions of show users
, show roles
,
show udfs
, and show sindex
, which only make requests to the principal node.
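As a sketch of the with modifier (the node addresses here are illustrative, not from the source), results can be restricted to specific nodes:

```
Admin> show config service with 10.0.0.1:3000 10.0.0.2:3000
```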
Best Practices
(Introduced: 2.4.0)
This command is supported in server v. 5.7 and later.
The show best-practices
command is used to display violations of Aerospike's best practices.
An explanation of each of Aerospike's best practices can be found at Best-Practices.
In the example below, node BB9010016AE4202 is violating two best practices, swappiness
and thp-enabled
, which are displayed in red. Nodes
BB9030016AE4202 and BB9040016AE4202 are not violating any best practices and display ok
in green.
Admin> show best-practices
~Best Practices (2021-09-21 23:55:09 UTC)~
Node|Response
BB9010016AE4202|swappiness, thp-enabled
BB9030016AE4202|ok
BB9040016AE4202|ok
Number of rows: 3
Following Aerospike's best practices is required for optimal stability and performance.
Configuration
The show config
command is used to display Aerospike configuration
settings. By default the command lists all server configuration parameters for security (added v. 7.0; previously joined with service), service, network, and namespace.
You may also provide one of the sub-commands xdr, security, service, network, or namespace to limit the output to a specific context.
In the example below, we request all network configuration parameters containing the words heartbeat or mesh.
Admin> show config network like heartbeat mesh
~~~~~~~~~~~~~~~~~~~~~~~~~Network Configuration (2020-12-17 01:07:36 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node |10.0.0.1:3000|10.0.0.2:3000|10.0.0.4:3000|10.0.0.5:3000|10.0.0.6:3000
heartbeat.connect-timeout-ms|500 |500 |500 |500 |500
heartbeat.interval |150 |150 |150 |150 |150
heartbeat.mode |multicast |multicast |multicast |multicast |multicast
heartbeat.mtu |1500 |1500 |1500 |1500 |1500
heartbeat.multicast-group |239.1.99.200 |239.1.99.200 |239.1.99.200 |239.1.99.200 |239.1.99.200
heartbeat.port |9918 |9918 |9918 |9918 |9918
heartbeat.protocol |v3 |v3 |v3 |v3 |v3
heartbeat.timeout |10 |10 |10 |10 |10
Number of rows: 9
We can use the diff modifier with show config
commands to show differences
between node configurations.
Example Output:
Admin> show config diff
~~~~~~~~~~~~~~~~~~~~~~Service Configuration (2020-12-17 01:09:07 UTC)~~~~~~~~~~~~~~~~~~~~~~
Node | 10.0.0.1:3000| 10.0.0.2:3000| 10.0.0.4:3000
pidfile|/var/run/aerospike/asd0.pid|/var/run/aerospike/asd1.pid|/var/run/aerospike/asd2.pid
Number of rows: 2
~~~~~~~~~Network Configuration (2020-12-17 01:09:07 UTC)~~~~~~~~~
Node | 10.0.0.1:3000| 10.0.0.2:3000| 10.0.0.4:3000
heartbeat-address|192.168.120.110|192.168.120.112|192.168.120.113
Number of rows: 2
~~~~~~~~test Namespace Configuration (2020-12-17 01:09:07 UTC)~~~~~~~~~
Node |10.0.0.1:3000|10.0.0.2:3000|10.0.0.4:3000
migrate-rx-partitions-initial|4036 |3904 |3614
migrate-tx-partitions-initial|3362 |4096 |4096
Number of rows: 3
~bar Namespace Configuration (2020-12-17 01:09:07 UT~
Node|10.0.0.1:3000|10.0.0.2:3000|10.0.0.4:3000
Number of rows: 1
For large clusters, we can use the -flip
option to flip the output table for simplicity and ease of understanding.
Example Output:
Admin> show config namespace like partition -flip
~test Namespace Configuration (2020-12-17 01:19:14 UTC)~~
Node|partition-tree-sprigs|sindex.num-partitions
10.0.0.1:3000| 256| 32
10.0.0.2:3000| 256| 32
10.0.0.4:3000| 256| 32
10.0.0.5:3000| 256| 32
10.0.0.6:3000| 256| 32
Number of rows: 5
~~bar Namespace Configuration (2020-12-17 01:19:14 UTC)~~
Node|partition-tree-sprigs|sindex.num-partitions
10.0.0.1:3000| 256| 32
10.0.0.2:3000| 256| 32
10.0.0.4:3000| 256| 32
10.0.0.5:3000| 256| 32
10.0.0.6:3000| 256| 32
Number of rows: 5
XDR Configuration
The show config xdr
command displays all the available configuration information related to XDR. By default,
this command displays the XDR configuration, XDR datacenter configuration, and XDR namespace configuration. You may also provide one of the sub-commands dc, namespace, or filter to limit the output to a specific context. For example, to see configuration parameters for only namespaces, use show config xdr namespace
. All of the commands support the for
, like
, and diff
modifiers.
The show config xdr
subcommands dc, namespace, and filter were added in asadm Tools package 8.2 (asadm 2.13).
The show config xdr dc
command displays a new table for each configured datacenter. The command also supports the for
modifier to filter by datacenter.
In the following example we get XDR datacenter configuration parameters that contain "max" for datacenter dc2:
Admin> show config xdr dc for dc2 like max
~~~~~~~~~XDR dc2 DC Configuration (2023-02-16 22:37:00 UTC)~~~~~~~~~
Node |10.0.0.4:3000|10.0.0.5:3000|10.0.0.6:3000
max-recoveries-interleaved|0 |0 |0
max-used-service-threads |0 |0 |0
Number of rows: 3
The show config xdr namespace
command displays a new table for each configured xdr namespace. The command also supports the for
modifier to filter first by namespace and then by datacenter.
In the following example we get XDR namespace configuration parameters that contain "sets" for namespace test and datacenter dc2:
Admin> show config xdr namespace for test dc2 like sets
~~~~XDR test Namespace Configuration (2023-02-16 22:41:12 UTC)~~~~
Datacenter |dc2 |dc2 |dc2
Node |10.0.0.4:3000|10.0.0.5:3000|10.0.0.6:3000
ignored-sets |testset |testset |testset
ship-only-specified-sets|false |false |false
shipped-sets | | |
Number of rows: 5
The show config xdr filter
command displays the xdr-filters that are set for a given namespace and datacenter. The command also supports the for
modifier to filter first by datacenter and then by namespace.
Admin> show config xdr filter
~~~~~~~~~~~~~~~~~~~~~~~XDR Filters (2023-02-16 22:55:02 UTC)~~~~~~~~~~~~~~~~~~~~~~~
Namespace|Datacenter| Base64 Expression| Expression
bar |dc2 |null |null
test |dc2 |kxGRSJMEk1ECo2FnZRU|or(is_tombstone(), ge(bin_int("age"), 21))
Number of rows: 2
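The for modifier described above could be combined with show config xdr filter to narrow the output; a sketch, reusing the datacenter and namespace names from the tables above:

```
Admin> show config xdr filter for dc2 test
```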
Distribution
The show distribution
command displays histograms;
it supports the object_size and time_to_live histograms. For server versions 3.7.5 and earlier, it also displays
the eviction histogram.
For object_size, the -b
parameter can be used to get a byte-wise distribution. For server versions 4.1.0.1 and earlier,
the -k
option sets the maximum number of buckets to show.
In the example below we can see that 10 percent of the objects in test and bar are set to expire within 427100 and 425500 seconds respectively.
Admin> show distribution time_to_live
~~~~~~~~~~~~test - TTL Distribution in Seconds (2020-12-18 02:14:24 UTC)~~~~~~~~~~~
Percentage of records having ttl less than or equal to value measured in Seconds
Node| 10%| 20%| 30%| 40%| 50%| 60%| 70%| 80%| 90%| 100%
10.0.0.1:3000|427100|427100|427100|427100|427100|427100|427100|427100|427100|427100
10.0.0.2:3000|427100|427100|427100|427100|427100|427100|427100|427100|427100|427100
10.0.0.3:3000|427100|427100|427100|427100|427100|427100|427100|427100|427100|427100
10.0.0.4:3000|427100|427100|427100|427100|427100|427100|427100|427100|427100|427100
10.0.0.6:3000|427100|427100|427100|427100|427100|427100|427100|427100|427100|427100
Number of rows: 5
~~~~~~~~~~~~bar - TTL Distribution in Seconds (2020-12-18 02:14:24 UTC)~~~~~~~~~~~~
Percentage of records having ttl less than or equal to value measured in Seconds
Node| 10%| 20%| 30%| 40%| 50%| 60%| 70%| 80%| 90%| 100%
10.0.0.1:3000|425500|425500|425500|425500|425500|425500|425500|425500|425500|425500
10.0.0.2:3000|425500|425500|425500|425500|425500|425500|425500|425500|425500|425500
10.0.0.3:3000|425500|425500|425500|425500|425500|425500|425500|425500|425500|425500
10.0.0.4:3000|425500|425500|425500|425500|425500|425500|425500|425500|425500|425500
10.0.0.6:3000|425500|425500|425500|425500|425500|425500|425500|425500|425500|425500
Number of rows: 5
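The -b parameter described above applies to the object_size histogram; a sketch (the flag is as documented above, and on server versions 4.1.0.1 and earlier -k could additionally cap the bucket count):

```
Admin> show distribution object_size -b
```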
Jobs
(Introduced: 2.5.0)
Access Control Permissions: data-admin
The show jobs [scan|query|sindex-builder]
command displays current and past jobs running on the Aerospike cluster and should be used
in conjunction with the manage jobs
controller. To make viewing easier, run the pager on
command first.
By default all job modules are shown. Each module table is organized in a number of ways for easier viewing. It groups the jobs by their Namespace
and Type
.
Groups are separated by horizontal dashes. Jobs are further organized left to right by their Progress %
and Time Since Done
.
Scan jobs are displayed until they are evicted by another scan job. The maximum number of scan jobs stored per node is configurable with scan-max-done. In contrast, query jobs are only displayed while they are running.
Note: sindex-builder jobs were removed in server v. 5.7.
Admin+> show jobs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Scan Jobs (2021-10-20 23:08:14 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node |10.0.0.3:3000 |10.0.0.2:3000 |10.0.0.1:3000
Namespace |bar |bar |bar
Module |scan |scan |scan
Type |basic |basic |basic
Progress % |100.0 |100.0 |100.0
Transaction ID |1583278212325152813 |1554763604191518487 |1554763604191518487
Time Since Done |00:33:26 |00:34:42 |00:34:43
active-threads |0 |0 |0
from |10.0.22.1+52252 |10.0.22.1+34048 |10.0.22.1+40340
n-pids-requested |1.366 K |1.365 K |1.365 K
net-io-bytes |37.940 MB |8.505 MB |8.048 MB
priority |0 |0 |0
recs-failed |0.000 |0.000 |0.000
recs-filtered-bins|0.000 |0.000 |0.000
recs-filtered-meta|0.000 |0.000 |0.000
recs-succeeded |333.874 K |75.826 K |71.779 K
recs-throttled |333.874 K |75.826 K |71.779 K
rps |0.000 |0.000 |0.000
run-time |00:00:05 |00:00:01 |00:00:01
socket-timeout |00:00:30 |00:00:30 |00:00:30
status |done(ok) |done(abandoned-response-timeout)|done(abandoned-response-timeout)
-------------------------------------------------------------------------------------------------------------------
Node |10.0.0.3:3000 |10.0.0.2:3000 |10.0.0.1:3000
Namespace |test |test |test
Module |scan |scan |scan
Type |basic |basic |basic
Progress % |100.0 |100.0 |100.0
Transaction ID |17709699727074092152|17709699727074092152 |17709699727074092152
Time Since Done |00:47:59 |00:47:59 |00:47:59
active-threads |0 |0 |0
from |10.0.22.1+51868 |10.0.22.1+33716 |174.22.22.1+40008
n-pids-requested |1.366 K |1.365 K |1.365 K
net-io-bytes |438.377 KB |443.145 KB |442.441 KB
priority |0 |0 |0
recs-failed |0.000 |0.000 |0.000
recs-filtered-bins|0.000 |0.000 |0.000
recs-filtered-meta|0.000 |0.000 |0.000
recs-succeeded |3.308 K |3.349 K |3.343 K
recs-throttled |3.308 K |3.349 K |3.343 K
rps |0.000 |0.000 |0.000
run-time |00:00:00 |00:00:00 |00:00:00
socket-timeout |00:00:30 |00:00:30 |00:00:30
status |done(ok) |done(ok) |done(ok)
Number of rows: 42
~~~~~~~~~~~~~~~~~~~~~Query Jobs (2021-10-20 23:08:14 UTC)~~~~~~~~~~~~~~~~~~~~~
Node |10.0.0.1:3000 |10.0.0.3:3000 |10.0.0.2:3000
Namespace |bar |bar |bar
Module |query |query |query
Progress % |0.0 |0.0 |0.0
Transaction ID |2143237531128163351|2143237531128163351|2143237531128163351
Time Since Done |00:00:00 |00:00:00 |00:00:00
active-threads |0 |0 |0
net-io-bytes |2.400 MB |2.087 MB |2.681 MB
priority |10 |10 |10
recs-failed |0.000 |0.000 |0.000
recs-filtered-bins|0.000 |0.000 |0.000
recs-filtered-meta|0.000 |0.000 |0.000
recs-succeeded |32.558 K |29.274 K |36.467 K
recs-throttled |0.000 |0.000 |0.000
rps |0.000 |0.000 |0.000
run-time |00:00:07 |00:00:07 |00:00:07
set |testset |testset |testset
sindex-name |a-bar-index |a-bar-index |a-bar-index
socket-timeout |00:00:00 |00:00:00 |00:00:00
status |active |active |active
Number of rows: 20
Latencies
(Introduced: 0.7.0)
The show latencies
command displays latency characteristics of reads, writes,
queries, replication, and UDFs.
This feature is fully supported on server 5.1 and above. Prior server versions have limited functionality.
We can change the number of latency buckets shown using the -b
parameter. The exponential increment used to calculate the value
assigned to each latency bucket can be set using the -e
parameter. If configurable benchmark histograms
are enabled, they can be viewed using the -v
parameter.
In the example below we look at verbose read latency with 8 buckets and a latency increment of 2.
Admin> show latencies -v -b 8 -e 2 like read
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Latency (2020-12-17 19:18:25 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Namespace|Histogram| Node|ops/sec|>1ms|>4ms|>16ms|>64ms|>256ms|>1024ms|>4096ms|>16384ms
bar |read |10.0.0.1:3000|455.4 |1.36|0.07|0.02 |0.0 |0.0 |0.0 |0.0 |0.0
bar |read |10.0.0.2:3000|1047.1 |3.5 |0.16|0.02 |0.0 |0.0 |0.0 |0.0 |0.0
bar |read |10.0.0.4:3000|1203.3 |1.51|0.13|0.02 |0.0 |0.0 |0.0 |0.0 |0.0
bar |read |10.0.0.5:3000|1241.3 |3.25|0.15|0.0 |0.0 |0.0 |0.0 |0.0 |0.0
bar |read |10.0.0.6:3000|946.2 |0.42|0.0 |0.0 |0.0 |0.0 |0.0 |0.0 |0.0
| | |1241.3 |3.5 |0.16|0.02 |0.0 |0.0 |0.0 |0.0 |0.0
test |read |10.0.0.1:3000|1280.8 |1.52|0.11|0.01 |0.0 |0.0 |0.0 |0.0 |0.0
test |read |10.0.0.2:3000|841.6 |3.94|0.15|0.0 |0.0 |0.0 |0.0 |0.0 |0.0
test |read |10.0.0.4:3000|517.1 |0.19|0.0 |0.0 |0.0 |0.0 |0.0 |0.0 |0.0
test |read |10.0.0.5:3000|523.7 |0.31|0.0 |0.0 |0.0 |0.0 |0.0 |0.0 |0.0
test |read |10.0.0.6:3000|733.1 |0.45|0.05|0.0 |0.0 |0.0 |0.0 |0.0 |0.0
| | |1280.8 |3.94|0.15|0.01 |0.0 |0.0 |0.0 |0.0 |0.0
Number of rows: 10
In the example below we look at the latency of write-master.
Admin> show latencies -v like write-master
~~~~~~~~~~~~~~Latency (2020-12-17 02:07:41 UTC)~~~~~~~~~~~~
Namespace| Histogram| Node|ops/sec|>1ms|>8ms|>64ms
test |write-master|10.0.0.1:3000|0.0 |0.0 |0.0 |0.0
test |write-master|10.0.0.2:3000|0.0 |0.0 |0.0 |0.0
test |write-master|10.0.0.4:3000|0.0 |0.0 |0.0 |0.0
test |write-master|10.0.0.5:3000|0.0 |0.0 |0.0 |0.0
test |write-master|10.0.0.6:3000|0.0 |0.0 |0.0 |0.0
| | |0.0 |0.0 |0.0 |0.0
Number of rows: 5
The show latencies
command supports the for
modifier to display latency per namespace. It also shows aggregate latency for the input namespaces (filtered by for
) in blue.
This feature works only for server version 3.9 and above.
The following example shows write latency for the test and bar namespaces, filtered by the for
input (te and b).
The rows without a namespace name or histogram show aggregate latency. Though not visible here, these rows are rendered in blue.
Admin> show latencies for te b like write
~~~~~~~~~~~~Latency (2020-12-17 02:43:52 UTC)~~~~~~~~~~~~
Namespace|Histogram| Node|ops/sec| >1ms|>8ms|>64ms
bar |write |10.0.0.1:3000|2314.0 |4.78 |0.06|0.0
bar |write |10.0.0.2:3000|2203.2 |26.16|0.31|0.0
bar |write |10.0.0.4:3000|1767.5 |4.43 |0.04|0.0
bar |write |10.0.0.5:3000|1525.3 |11.84|0.09|0.0
bar |write |10.0.0.6:3000|1484.8 |4.26 |0.05|0.0
| | |2314.0 |26.18|0.31|0.0
test |write |10.0.0.1:3000|0.0 |0.0 |0.0 |0.0
test |write |10.0.0.2:3000|0.0 |0.0 |0.0 |0.0
test |write |10.0.0.4:3000|126.7 |6.55 |0.32|0.0
test |write |10.0.0.5:3000|363.1 |13.99|0.11|0.0
test |write |10.0.0.6:3000|319.4 |9.89 |0.19|0.0
| | |363.1 |13.99|0.32|0.0
Number of rows: 10
Latency
(Introduced: 0.1.15)
(Removed: 0.7.0)
The show latency
command displays latency characteristics of reads, writes,
and proxies.
We can get latency for a specific time range, in intervals, using the -f
, -d
, and -t
parameters.
We can also set -m
to display the latency output per machine; the default display is per histogram name.
In the example below we look at the latency of writes_master.
Admin> show latency like writes_master
~~~~~~~~~~~~~~~writes_master Latency (2018-03-02 08:28:09 UTC)~~~~~~~~~~~~~~~~
Node Time Ops/Sec %>1Ms %>8Ms %>64Ms
. Span . . . .
u10.aerospike.local:3000 08:27:58->08:28:08 2044.7 1.09 0.0 0.0
u12.aerospike.local:3000 08:27:58->08:28:08 2012.6 0.77 0.0 0.0
u13.aerospike.local:3000 08:27:58->08:28:08 1968.9 1.03 0.0 0.0
Number of rows: 3
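A sketch of the time-range parameters described above, assuming -f, -d, and -t take values in seconds (the values here are illustrative, not from the source):

```
Admin> show latency like writes_master -f 60 -d 300 -t 30
```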
The show latency
command supports the for
modifier to display latency per namespace. It also shows aggregate latency for the input namespaces (filtered by for
) in blue. This feature works only for server version 3.9 and above.
The following example shows query latency for the test and bar namespaces, filtered by the for
input (te and b).
The third row, without a namespace name, is the aggregate latency for the test and bar namespaces. Though not visible here, this row is rendered in blue.
Admin> show latency for te b like query
~~~~~~~~~~~~~~~~~~~~~~~~~~query Latency (2018-03-02 08:28:09 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node Namespace Time Ops/Sec %>1Ms %>8Ms %>64Ms
. . Span . . . .
1.0.0.127.in-addr.arpa:3000 bar 08:27:58->08:28:08 295.2 2.0 0.0 0.0
1.0.0.127.in-addr.arpa:3000 test 08:27:58->08:28:08 100.0 2.7 0.0 0.0
1.0.0.127.in-addr.arpa:3000 08:27:58->08:28:08 395.2 2.18 0.0 0.0
Number of rows: 3
Mapping
The show mapping
command displays mappings from IP to Node-ID and from Node-ID to IPs.
By default it displays both maps; the
sub-commands ip and node confine the output to
a single map. We can also use the like
modifier to supply a substring of the expected IP or Node-ID.
The following example shows the IP to Node-ID mapping for IPs containing the substring "231" or "233".
Admin> show mapping ip like 231 233
~IP to NODE-ID Mappings (2020-12-18 00:49:14 UTC)~
IP| Node ID
172.16.245.231:3000|BB9010016AE4202
172.16.245.233:3000|BB9020016AE4202
Number of rows: 2
The following example shows the Node-ID to IPs mapping for Node-IDs containing the substring "BB". It displays all available endpoints for each node.
Admin> show mapping node like BB
~NODE-ID to IPs Mappings (2020-12-18 00:50:43 UTC)~
Node ID| IP
BB9010016AE4202| 10.0.0.1:3000
Number of rows: 1
Pmap
(Introduced: 0.1.12)
The show pmap
command displays a partition map analysis of the Aerospike cluster.
The following example shows the output of the show pmap
command.
Primary Partitions: Total number of primary partitions for a specific namespace on that node.
Secondary Partitions: Total number of secondary partitions for a specific namespace on that node.
Unavailable Partitions: The number of partitions that are unavailable when roster nodes are missing.
Dead Partitions: The number of partitions that are unavailable even when all roster nodes are present.
Admin> show pmap
~~~~~~~~~~~~Partition Map Analysis (2020-12-18 01:12:36 UTC)~~~~~~~~~~~
Namespace| Node| Cluster Key|~~~~~~~~~~~~Partitions~~~~~~~~~~~~
| | |Primary|Secondary|Unavailable|Dead
bar |10.0.0.1:3000|33718FC58CD6| 791| 799| 0| 0
bar |10.0.0.2:3000|33718FC58CD6| 868| 822| 0| 0
bar |10.0.0.3:3000|33718FC58CD6| 839| 862| 0| 0
bar |10.0.0.4:3000|33718FC58CD6| 800| 780| 0| 0
bar |10.0.0.6:3000|33718FC58CD6| 798| 833| 0| 0
bar | | | 4096| 4096| 0| 0
test |10.0.0.1:3000|33718FC58CD6| 791| 799| 0| 0
test |10.0.0.2:3000|33718FC58CD6| 868| 822| 0| 0
test |10.0.0.3:3000|33718FC58CD6| 839| 862| 0| 0
test |10.0.0.4:3000|33718FC58CD6| 800| 780| 0| 0
test |10.0.0.6:3000|33718FC58CD6| 798| 833| 0| 0
test | | | 4096| 4096| 0| 0
Number of rows: 10
Racks
(Introduced: 2.5.0)
The show racks
command displays a namespace's rack-ids and the nodes assigned to each.
This is particularly useful in rack-aware configurations.
Admin> show racks
~~~~~~~~~~~~~~~~Racks (2021-10-21 20:33:28 UTC)~~~~~~~~~~~~~~~~~
Namespace|Rack| Nodes
| ID|
bar |4 |BB9040016AE4202, BB9020016AE4202, BB9010016AE4202
test |2 |BB9040016AE4202, BB9010016AE4202
Number of rows: 2
Roles
(Introduced: 2.1.0)
Access Control Permissions: user-admin
The show roles
command displays roles along with associated privileges, allowlists, and quotas as
returned by the principal node. show roles
can be used in conjunction with manage acl roles
to perform role administration.
Admin+> show roles
~~~~~~~~~~~~~~~~~~~~~~~~~~~Roles (2021-04-21 22:28:01 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Role| Privileges|Allowlist|~~~Quotas~~
| | | Read|Write
read | read| --|-- |--
read-write | read-write| --|-- |--
read-write-udf| read-write-udf| --|-- |--
reader | read| 1.1.1.1|10000|1
root | user-admin, sys-admin, data-admin, read-write| --|-- |--
superuser |user-admin, sys-admin, data-admin, read-write-udf| --|-- |--
sys-admin | sys-admin| --|-- |--
user-admin | user-admin| --|-- |--
write | write| --|-- |--
writer | read-write| 2.2.2.2|1 |10000
Number of rows: 10
Roster
(Introduced: 2.5.0)
The show roster
command displays the current and pending rosters as well as the observed nodes.
To make viewing easier, run the pager on
command first. show roster
can be used in conjunction with manage roster
to modify the pending roster. To filter output by namespace use the for
modifier. To filter output by
node use the with
modifier. To display differences between values in any given column use the diff
modifier.
Admin> show roster
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Roster (2021-10-21 20:12:29 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node| Node ID|Namespace| Current Roster| Pending Roster| Observed Nodes
10.0.0.1:3000|BB9010016AE4202 |bar |BB9040016AE4202@4, BB9020016AE4202@4, BB9010016AE4202@4|BB9040016AE4202@4, BB9020016AE4202@4, BB9010016AE4202@4|BB9040016AE4202@4, BB9020016AE4202@4, BB9010016AE4202@4
10.0.0.2:3000|BB9020016AE4202 |bar |BB9040016AE4202@4, BB9020016AE4202@4, BB9010016AE4202@4|BB9040016AE4202@4, BB9020016AE4202@4, BB9010016AE4202@4|BB9040016AE4202@4, BB9020016AE4202@4, BB9010016AE4202@4
10.0.0.4:3000|*BB9040016AE4202|bar |BB9040016AE4202@4, BB9020016AE4202@4, BB9010016AE4202@4|BB9040016AE4202@4, BB9020016AE4202@4, BB9010016AE4202@4|BB9040016AE4202@4, BB9020016AE4202@4, BB9010016AE4202@4
10.0.0.1:3000|BB9010016AE4202 |test |BB9040016AE4202@2, BB9020016AE4202@2, BB9010016AE4202@2|BB9040016AE4202@2, BB9020016AE4202@2, BB9010016AE4202@2|BB9040016AE4202@2, BB9020016AE4202@2, BB9010016AE4202@2
10.0.0.2:3000|BB9020016AE4202 |test |BB9040016AE4202@2, BB9020016AE4202@2, BB9010016AE4202@2|BB9040016AE4202@2, BB9020016AE4202@2, BB9010016AE4202@2|BB9040016AE4202@2, BB9020016AE4202@2, BB9010016AE4202@2
10.0.0.4:3000|*BB9040016AE4202|test |BB9040016AE4202@2, BB9020016AE4202@2, BB9010016AE4202@2|BB9040016AE4202@2, BB9020016AE4202@2, BB9010016AE4202@2|BB9040016AE4202@2, BB9020016AE4202@2, BB9010016AE4202@2
Number of rows: 6
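As a sketch of the modifiers described above (the namespace and node values here are the hypothetical ones from the example output), the roster display can be narrowed or compared like this:
Admin> show roster for test
Admin> show roster with 10.0.0.1:3000
Admin> show roster diff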
Secondary Indexes
(Introduced: 2.1.0)
The show sindex
command displays secondary indexes and associated static metadata as
returned by the principal node. show sindex
can be used in conjunction with manage sindex
to perform sindex management.
Admin+> show sindex
~~~~~~Secondary Indexes (2021-01-22 23:04:49 UTC)~~~~~~
Index Name|Namespace| Set| Bin| Bin| Index|State
| | | | Type| Type|
name-sindex|bar |NULL|name|STRING |NONE |RW
age-index |test |NULL| age|NUMERIC|MAPVALUES|RW
job-index |test |NULL| age|STRING |MAPVALUES|RW
Number of rows: 3
Statistics
The show statistics
command displays all server statistics from several
server components. By default it returns statistics for the bin, set, service, and namespace contexts, but the
sub-commands bins, namespace, service, sets, sindex, and xdr confine the output to
a single context. See below for details and additional subcommands of show statistics xdr
.
You can also pass the -t
option to add an aggregate column containing the total across nodes. The total column displays the
sum of statistics with numeric values.
The example below displays service-level statistics, filtering for metrics containing the token "batch" and displaying a total column:
Admin> show statistics service like batch -t
~~~~~~~~~~~~~~~~~~Service Statistics (2020-12-18 01:33:36 UTC)~~~~~~~~~~~~~~~~~~~~~
Node |10.0.0.1:3000|10.0.0.2:3000|10.0.0.3:3000|
batch_index_complete |0 |0 |0 |0
batch_index_created_buffers |0 |0 |0 |0
batch_index_delay |0 |0 |0 |0
batch_index_destroyed_buffers |0 |0 |0 |0
batch_index_error |0 |0 |0 |0
batch_index_huge_buffers |0 |0 |0 |0
batch_index_initiate |0 |0 |0 |0
batch_index_proto_compression_ratio |1.0 |1.0 |1.0 |
batch_index_proto_uncompressed_pct |0.0 |0.0 |0.0 |0.0
batch_index_queue |0:0,0:0 |0:0,0:0 |0:0,0:0 |
batch_index_timeout |0 |0 |0 |0
batch_index_unused_buffers |0 |0 |0 |0
early_tsvc_batch_sub_error |0 |0 |0 |0
early_tsvc_from_proxy_batch_sub_error|0 |0 |0 |0
Number of rows: 15
For large clusters, use the -flip
option to transpose the output for readability.
Example Output:
Admin> show statistics namespace for test like partition-tree -flip
~test Namespace Statistics (2020-12-18 01:58:32 UTC)~
Node|partition-tree-sprigs
10.0.0.1:3000| 256
10.0.0.2:3000| 256
10.0.0.3:3000| 256
10.0.0.4:3000| 256
10.0.0.6:3000| 256
Number of rows: 5
XDR Statistics
The show statistics xdr
command displays all available statistics related to XDR. By default,
this command displays XDR datacenter statistics and XDR namespace statistics. You may also provide one of the sub-commands dc or namespace to limit the output to a specific context.
The show statistics xdr
subcommands dc and namespace were added in Tools package 8.2 (asadm 2.13).
The show statistics xdr dc
command displays a new table for each configured datacenter. The command also supports the for
modifier to filter by datacenter.
Admin> show statistics xdr dc for dc2 like retry
~~~~~~~~~XDR dc2 DC Statistics (2023-02-16 23:56:28 UTC)~~~~~~~~~~
Node |172.17.0.4:3000|172.17.0.5:3000|172.17.0.6:3000
retry_conn_reset|0 |0 |0
retry_dest |0 |0 |0
retry_no_node |0 |0 |0
Number of rows: 4
The show statistics xdr namespace
command displays a new table for each configured xdr namespace. The command also supports the for
modifier to filter first by namespace and then by datacenter.
Admin> show statistics xdr namespace like retry
~~~~~~~~~~~~~~~~~~~~~~XDR test Namespace Statistics (2023-02-16 23:57:32 UTC)~~~~~~~~~~~~~~~~~~~~~~~
Datacenter |dc1 |dc1 |dc1 |dc2 |dc2 |dc2
Node |10.0.0.4:3000|10.0.0.5:3000|10.0.0.6:3000|10.0.0.4:3000|10.0.0.5:3000|10.0.0.6:3000
retry_conn_reset|0 |0 |0 |0 |0 |0
retry_dest |0 |0 |0 |0 |0 |0
retry_no_node |0 |0 |0 |0 |0 |0
Number of rows: 5
~~~~~~~~~~~~~~~~~~~~~~XDR bar Namespace Statistics (2023-02-16 23:57:32 UTC)~~~~~~~~~~~~~~~~~~~~~~~
Datacenter |dc1 |dc1 |dc1 |dc2 |dc2 |dc2
Node |10.0.0.4:3000|10.0.0.5:3000|10.0.0.6:3000|10.0.0.4:3000|10.0.0.5:3000|10.0.0.6:3000
retry_conn_reset|0 |0 |0 |0 |0 |0
retry_dest |0 |0 |0 |0 |0 |0
retry_no_node |0 |0 |0 |0 |0 |0
Number of rows: 5
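As a sketch of the for
modifier described above (the namespace and datacenter names are the hypothetical ones from the example output), the tables can be narrowed first by namespace and then by datacenter:
Admin> show statistics xdr namespace for test dc1 like retry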
To instead display a new table for each configured datacenter use the --by-dc
flag.
Admin> show statistics xdr namespace like retry --by-dc
~~~~~~~~~~~~~~~~~~~~~~XDR dc1 Namespace Statistics (2023-02-16 23:57:32 UTC)~~~~~~~~~~~~~~~~~~~~~~~
Namespace |test |test |test |bar |bar |bar
Node |10.0.0.4:3000|10.0.0.5:3000|10.0.0.6:3000|10.0.0.4:3000|10.0.0.5:3000|10.0.0.6:3000
retry_conn_reset|0 |0 |0 |0 |0 |0
retry_dest |0 |0 |0 |0 |0 |0
retry_no_node |0 |0 |0 |0 |0 |0
Number of rows: 5
~~~~~~~~~~~~~~~~~~~~~~XDR dc2 Namespace Statistics (2023-02-16 23:57:32 UTC)~~~~~~~~~~~~~~~~~~~~~~~
Namespace |test |test |test |bar |bar |bar
Node |10.0.0.4:3000|10.0.0.5:3000|10.0.0.6:3000|10.0.0.4:3000|10.0.0.5:3000|10.0.0.6:3000
retry_conn_reset|0 |0 |0 |0 |0 |0
retry_dest |0 |0 |0 |0 |0 |0
retry_no_node |0 |0 |0 |0 |0 |0
Number of rows: 5
Stop Writes
(Introduced: 2.15.0)
The show stop-writes
command provides comprehensive information about stop-writes configuration parameters, their metrics, and their associated context, i.e. service (global), namespace, or set. It is particularly helpful in determining how close the cluster is to a stop-writes threshold at each of these levels, and in identifying the reasons for being in the stop-writes state.
show stop-writes
displays the following table, ordered by proximity to the configured stop-writes threshold. For instance, the stop-writes-count
configuration for the namespace test and set testset is closest to reaching its limit of 10,000 records and is therefore positioned at the bottom of the table. This ordering helps you address issues effectively by presenting the relevant configuration details, the metric that might exceed the threshold, the current proximity to that threshold, the actual usage, and the threshold itself. A --
threshold means none is configured.
Admin> show stop-writes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Stop Writes (2023-05-23 23:01:01 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Show all stop writes - add 'for <namespace> [<set>]' for a shorter list.
Config|Namespace| Set| Node|Stop-Writes| Metric| Usage%| Usage|Threshold
stop-writes-size |test |testset|172.17.0.5:3000|False |memory_data_bytes | --|123.005 KB| --
stop-writes-size |test |testset|172.17.0.4:3000|False |memory_data_bytes | --|123.373 KB| --
stop-writes-size |test |testset|172.17.0.3:3000|False |memory_data_bytes | --|123.246 KB| --
-- |test |-- |172.17.0.5:3000|False |cluster_clock_skew_ms| --| 00:00:00| --
-- |bar |-- |172.17.0.5:3000|False |cluster_clock_skew_ms| --| 00:00:00| --
-- |test |-- |172.17.0.4:3000|False |cluster_clock_skew_ms| --| 00:00:00| --
-- |bar |-- |172.17.0.4:3000|False |cluster_clock_skew_ms| --| 00:00:00| --
-- |test |-- |172.17.0.3:3000|False |cluster_clock_skew_ms| --| 00:00:00| --
-- |bar |-- |172.17.0.3:3000|False |cluster_clock_skew_ms| --| 00:00:00| --
stop-writes-pct |bar |-- |172.17.0.3:3000|False |memory_used_bytes | 0.0 %| 0.000 B | 3.600 GB
stop-writes-pct |bar |-- |172.17.0.4:3000|False |memory_used_bytes | 0.0 %| 0.000 B | 3.600 GB
stop-writes-pct |bar |-- |172.17.0.5:3000|False |memory_used_bytes | 0.0 %| 0.000 B | 3.600 GB
stop-writes-pct |test |-- |172.17.0.5:3000|False |memory_used_bytes | 1.74 %|728.567 KB|40.960 MB
stop-writes-pct |test |-- |172.17.0.3:3000|False |memory_used_bytes | 1.74 %|729.996 KB|40.960 MB
stop-writes-pct |test |-- |172.17.0.4:3000|False |memory_used_bytes | 1.74 %|730.748 KB|40.960 MB
stop-writes-sys-memory-pct|bar |-- |172.17.0.3:3000|False |system_free_mem_pct |28.89 %| 26.0 %| 90.0 %
stop-writes-sys-memory-pct|test |-- |172.17.0.3:3000|False |system_free_mem_pct |28.89 %| 26.0 %| 90.0 %
stop-writes-sys-memory-pct|bar |-- |172.17.0.4:3000|False |system_free_mem_pct |28.89 %| 26.0 %| 90.0 %
stop-writes-sys-memory-pct|test |-- |172.17.0.4:3000|False |system_free_mem_pct |28.89 %| 26.0 %| 90.0 %
stop-writes-sys-memory-pct|bar |-- |172.17.0.5:3000|False |system_free_mem_pct |28.89 %| 26.0 %| 90.0 %
stop-writes-sys-memory-pct|test |-- |172.17.0.5:3000|False |system_free_mem_pct |28.89 %| 26.0 %| 90.0 %
stop-writes-count |test |testset|172.17.0.5:3000|False |objects |96.89 %| 9.689 K| 10.000 K
stop-writes-count |test |testset|172.17.0.3:3000|False |objects |97.08 %| 9.708 K| 10.000 K
stop-writes-count |test |testset|172.17.0.4:3000|False |objects |97.18 %| 9.718 K| 10.000 K
Number of rows: 24
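As the hint in the output header suggests, the for
modifier shortens the list to a single namespace or set. For example (using the hypothetical namespace and set from the output above):
Admin> show stop-writes for test testset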
User Defined Functions
(Introduced: 2.1.0)
The show udfs
command displays user-defined function (UDF) modules as
returned by the principal node. show udfs
can be used in conjunction with
manage udfs
to perform UDF management.
Admin+> show udfs
~~~~~~~~UDF Modules (2021-01-22 23:12:29 UTC)~~~~~~~~~
Filename| Hash|Type
abc.123 |dceaf7f1acddf1d6e12a1752d499d80cfadfc24b|LUA
bar.lua |591d2536acb21a329040beabfd9bfaf110d35c18|LUA
foo.lua |f6eaf2b22d8b29b3597ef1ad9113d0907425ecd0|LUA
Users
(Introduced: 2.1.0)
Access Control Permissions: user-admin
The show users [user]
command displays users along with their associated roles as
returned by the principal node. Optionally, you can display a single user by providing a username as the first argument.
show users
can be used in conjunction with manage acl users
to perform user administration.
User runtime statistics were moved to the show users statistics
command in Tools package 8.4.0 (asadm 2.15.0).
In asadm 2.2.0 through 2.14.0 (inclusive), runtime statistics were located in the show users
table if quotas were enabled, but they only accounted for a single node, the principal.
Admin+> show users
~~Users (2023-05-24 20:52:11 UTC)~~
To see users statistics run 'show users statistics'
User| Roles|~Read~|~Write~
| | Quota| Quota
admin |user-admin|0 |0
reader | reader|10000 |1
root | root|0 |0
superuser| superuser|0 |0
writer | writer|1 |10000
Number of rows: 5
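To display a single user, pass the username as the first argument. As a sketch (using a user from the example above), the output is the same table restricted to that user:
Admin+> show users reader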
Users Statistics
(Introduced: 2.15.0)
Access Control Permissions: user-admin
The show users statistics [user]
command displays users, number of user connections, and quota related metrics across all nodes in the cluster.
You can use this to see the live activity of your users and find out which users might be close to or exceeding their assigned quotas. In addition
to the per-node rows, an aggregate line displays usage for the entire cluster. Optionally, you can retrieve a single user
by providing a username as the first argument. show users statistics
can be used in conjunction with show users
and manage acl users
to perform user administration.
Admin+> show users stat
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Users Statistics (2023-05-24 21:49:04 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
User| Node|Connections|~~~~~~~~~~~~~~~~~~~~Read~~~~~~~~~~~~~~~~~~~|~~~~~~~~~~~~~~~~~~~Write~~~~~~~~~~~~~~~~~~~
| | | Quota|Usage%| Single| PI/SI| PI/SI| Quota|Usage%| Single| PI/SI| PI/SI
| | | | | Record| Query| Query| | | Record| Query| Query
| | | | | TPS|Limited|Limitless| | | TPS|Limited|Limitless
| | | | | | RPS| | | | | RPS|
admin |172.17.0.3:3000| 2.000 | 0.000 | --| 0.000 |0.000 | 0.000 | 0.000 | --| 0.000 |0.000 | 0.000
admin |172.17.0.4:3000| 2.000 | 0.000 | --| 0.000 |0.000 | 0.000 | 0.000 | --| 0.000 |0.000 | 0.000
admin |172.17.0.5:3000| 2.000 | 0.000 | --| 0.000 |0.000 | 0.000 | 0.000 | --| 0.000 |0.000 | 0.000
admin | | 6.000 | 0.000 | 0.0 %| 0.000 |0.000 | 0.000 | 0.000 | 0.0 %| 0.000 |0.000 | 0.000
reader |172.17.0.3:3000| 13.000 |10.000 K|5.93 %|593.000 |0.000 | 0.000 | 1.000 | 0.0 %| 0.000 |0.000 | 0.000
reader |172.17.0.4:3000| 13.000 |10.000 K|5.15 %|515.000 |0.000 | 0.000 | 1.000 | 0.0 %| 0.000 |0.000 | 0.000
reader |172.17.0.5:3000| 13.000 |10.000 K| 4.7 %|470.000 |0.000 | 0.000 | 1.000 | 0.0 %| 0.000 |0.000 | 0.000
reader | | 39.000 |30.000 K|5.26 %| 1.578 K|0.000 | 0.000 | 3.000 | 0.0 %| 0.000 |0.000 | 0.000
root |172.17.0.3:3000| --| 0.000 | --| 0.000 |0.000 | 0.000 | 0.000 | --| 0.000 |0.000 | 0.000
root |172.17.0.4:3000| --| 0.000 | --| 0.000 |0.000 | 0.000 | 0.000 | --| 0.000 |0.000 | 0.000
root |172.17.0.5:3000| --| 0.000 | --| 0.000 |0.000 | 0.000 | 0.000 | --| 0.000 |0.000 | 0.000
root | | --| 0.000 | 0.0 %| 0.000 |0.000 | 0.000 | 0.000 | 0.0 %| 0.000 |0.000 | 0.000
superuser|172.17.0.3:3000| 14.000 | 0.000 | --|263.000 |0.000 | 0.000 | 0.000 | --|271.000 |0.000 | 0.000
superuser|172.17.0.4:3000| 12.000 | 0.000 | --|225.000 |0.000 | 0.000 | 0.000 | --|257.000 |0.000 | 0.000
superuser|172.17.0.5:3000| 14.000 | 0.000 | --|227.000 |0.000 | 0.000 | 0.000 | --|226.000 |0.000 | 0.000
superuser| | 40.000 | 0.000 | 0.0 %|715.000 |0.000 | 0.000 | 0.000 | 0.0 %|754.000 |0.000 | 0.000
writer |172.17.0.3:3000| 14.000 | 1.000 | 0.0 %| 0.000 |0.000 | 0.000 |10.000 K|5.29 %|529.000 |0.000 | 0.000
writer |172.17.0.4:3000| 12.000 | 1.000 | 0.0 %| 0.000 |0.000 | 0.000 |10.000 K|4.56 %|456.000 |0.000 | 0.000
writer |172.17.0.5:3000| 14.000 | 1.000 | 0.0 %| 0.000 |0.000 | 0.000 |10.000 K|4.45 %|445.000 |0.000 | 0.000
writer | | 40.000 | 3.000 | 0.0 %| 0.000 |0.000 | 0.000 |30.000 K|4.77 %| 1.430 K|0.000 | 0.000
Number of rows: 15
Manage
(Introduced: 2.1.0)
The manage
commands provide a convenient way to administer your access control
list (acl), add and remove user defined functions (udfs), create and delete
secondary indexes (sindex), and dynamically configure your cluster.
To access the manage
commands, the user must first enter privileged
mode by typing enable [--warn]
. See enable for more information.
Unlike most other commands, manage
commands require one or more arguments.
Additionally, each manage
command requires specific access rights. Please see
Configuring Access Control
and descriptions of manage
commands below.
Access Control List
(Introduced: 2.1.0)
The manage acl
commands allow for user and role management. User and role
management follow a similar syntax for many of the commands. The general syntax
is manage acl <operation> user|role <username>|<role-name> ...
For example,
creating a user would be prefixed by manage acl create user <username>
while creating a
role would be prefixed by manage acl create role <role-name>
. The show users
and show roles
commands should be used in conjunction with manage acl
commands.
User Creation
(Introduced: 2.1.0)
Access Control Permissions: user-admin
The manage acl create user <username> [password <password>] [roles <role1> <role2> ...]
command allows you to create new users and assign
them roles. To keep a password out of command history, asadm
prompts for a
password when the password
argument is not provided. For the rules governing valid
passwords, see Local to Aerospike Passwords.
Roles are assigned using the roles
keyword; however, assigning roles to a
new user is not required.
In this example we create a user "Mr-Rogers" with role "Good-Neighbor" and because we do not provide a password, a prompt is provided.
Admin+> manage acl create user Mr-Rogers roles Good-Neighbor
Enter password for new user Mr-Rogers:
Successfully created user Mr-Rogers
Deleting a User
(Introduced: 2.1.0)
Access Control Permissions: user-admin
Use the command manage acl delete user <username>
to remove a user.
Admin+> manage acl delete user Thanos
Successfully deleted user Thanos
Setting a User's Password
(Introduced: 2.1.0)
Access Control Permissions: user-admin
In Tools package 7.1.1 (asadm 2.8) and earlier, asadm (and aql, where this was formerly performed) limits the characters you can use when setting a password.
Valid passwords can contain alphanumeric characters and the symbols .*-:/_{}@
. Whitespace is not supported.
The manage acl set-password user <username> [password <password>]
command allows a user-admin to change
the password of any user without knowing that user's current password.
Passwords that contain whitespace must be quoted. Double and single quotes must either
be escaped or be different from the enclosing quote.
To keep a password out of command history asadm
prompts for
a password when the password
argument is not provided.
Admin+> manage acl set-password user jesse
Enter new password for user jesse:
Successfully set password for user jesse
Changing a User's Password
(Introduced: 2.1.0)
Access Control Permissions: None
The manage acl change-password user <username> [old <old-password>] [new <new-password>]
command allows a user to change the
password of any other user, as long as that user's current password is provided. To keep
both the old and new password out of command history, asadm
prompts
for them when not provided.
Admin+> manage acl change-password user Kelly
Enter old password:
Enter new password:
Successfully changed password for user Kelly
Granting Roles to a User
(Introduced: 2.1.0)
Access Control Permissions: user-admin
The manage acl grant user <username> roles <role1> [<role2> [...]]
command adds one or more roles to an
existing user using the roles
keyword.
Admin+> manage acl grant user Kelly roles data-admin
Successfully granted roles to user Kelly
Revoking Roles from a User
(Introduced: 2.1.0)
Access Control Permissions: user-admin
The manage acl revoke user <username> roles <role1> [<role2> [...]]
command removes one or more roles from
an existing user using the roles
keyword.
Admin+> manage acl revoke user Kelly roles data-admin
Successfully revoked roles from user Kelly
Role Creation
(Introduced: 2.1.0)
(Quotas Introduced: 2.2.0)
Access Control Permissions: user-admin
The
manage acl create role <role-name> priv <privilege> [ns <namespace> [set <set>]] [allow <addr1> [<addr2> [...]]] [read <read-quota>] [write <write-quota>]
command allows the creation of new roles
and assigning them a privilege and allowlist. Assigning a privilege is required
and is done using the priv
keyword followed by a privilege. Some privileges can
also have namespace or set scopes which can be defined with the ns
and set
keywords.
Please see Privileges, permissions, and scopes
for more information. To assign an allowlist use
the allow
keyword followed by one or more addresses.
To assign a read quota and/or write quota use the read
and write
keywords.
In this example we create a role "devops" with the "read-write" privilege scoped to namespace "test" and set "testset", an allowlist of "10.0.0.1", a read quota of 3000, and a write quota of 4000.
Admin+> manage acl create role devops priv read-write ns test set testset allow 10.0.0.1 read 3000 write 4000
Successfully created role devops
Deleting a Role
(Introduced: 2.1.0)
Access Control Permissions: user-admin
The manage acl delete role <role-name>
command allows for the removal of a role.
Admin+> manage acl delete role devops
Successfully deleted role devops
Granting a Privilege to a Role
(Introduced: 2.1.0)
Access Control Permissions: user-admin
The manage acl grant role <role-name> priv <privilege> [ns <namespace> [set <set>]]
command adds a privilege to an existing role. Some privileges can
also have namespace or set scopes which can be defined with the ns
and set
keywords.
Please see Privileges, permissions, and scopes
for more information.
Admin+> manage acl grant role superwoman priv write ns bar set testset
Successfully granted privilege to role superwoman
Revoking a Privilege from a Role
(Introduced: 2.1.0)
Access Control Permissions: user-admin
The manage acl revoke role <role-name> priv <privilege> [ns <namespace> [set <set>]]
command removes a single privilege from a role. If the privilege has a namespace
scope the ns
argument is required. If the privilege has a set scope the ns
and set
arguments are required.
Admin+> manage acl revoke role superwoman priv data-admin ns test set testset
Successfully revoked privilege from role superwoman
Updating the Allowlist of a Role
(Introduced: 2.1.0)
Access Control Permissions: user-admin
The allowlist
command has two functions: it can overwrite the allowlist for a
role, or it can clear a role's allowlist. To overwrite the allowlist use
manage acl allowlist role <role-name> allow <addr1> [<addr2> [...]]
. To clear an allowlist
use manage acl allowlist role <role-name> clear
.
Overwriting allowlist:
Admin+> manage acl allowlist role superwoman allow 10.0.0.1 10.1.2.3
Successfully updated allowlist for role superwoman
Clearing allowlist:
Admin+> manage acl allowlist role superwoman clear
Successfully cleared allowlist from role superwoman
Updating the Quotas of a Role
(Introduced: 2.2.0)
The manage acl quotas role <role-name> [read <read-quota>]|[write <write-quota>]
command changes the read and/or write quota for a role using the read
and write
keywords. At least one of the read
or write
keywords must be provided. If a
keyword is omitted, the respective quota will
not be changed. To remove a quota from a role, set its value to 0
.
Admin+> manage acl quotas role superwoman read 6000 write 9000
Successfully set quotas for role superwoman.
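Because setting a quota to 0
removes it, clearing only the read quota while leaving the write quota unchanged would look like this (a sketch using the role from the example above):
Admin+> manage acl quotas role superwoman read 0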
Dynamic Configuration
(Introduced: 2.3.0)
The manage config
commands are used to edit configuration, create XDR datacenters,
add and remove XDR nodes, and add and remove XDR namespaces in the Aerospike cluster.
manage config
commands were designed to match the structure of
the aerospike.conf
file; by knowing the context of a configuration parameter one should
be able to issue the correct command. By default, manage config
commands affect
all nodes in the Aerospike cluster. To run a command against only a subset
of nodes, use the with
modifier. To see which nodes a command will affect, enter
privileged mode with the --warn
flag. manage config
commands also come with robust tab
completion for contexts, sub-contexts, parameters, and values for Aerospike Database 4.0
and newer. For tab completion in the latest version of the Aerospike database use the latest
version of asadm. The show config
command should be used in conjunction with manage config
commands.
Changing Configuration Parameters
(Introduced: 2.3.0)
To change the value of a configuration parameter use the manage config <context> [<sub-context1> [<name1>] [<sub-context2> [<name2>] [...]]] param <parameter> to <value>
command. If a context or sub-context
is followed by a name (e.g. namespace, set, dc) in aerospike.conf, then the
<context>
or <subcontext>
must also be followed by a name.
Examples:
- Changing the service configuration:
manage config service param <parameter> to <value> [with node1 [node2 [...]]]
Admin+> manage config service param proto-fd-max to 1500 with 10*
~Set Service Param proto-fd-max to 1500~
Node|Response
10.0.0.1:3000|ok
10.0.0.2:3000|ok
10.0.0.3:3000|ok
10.0.0.4:3000|ok
10.0.0.5:3000|ok
Number of rows: 5
- Changing the logging configuration for the
aerospike.log
file:
manage config logging file <log-file-name> param <parameter> to <value> [with node1 [node2 [...]]]
The param
keyword specifies the logging context
you would like to change while the to
keyword specifies the desired
severity level.
Admin+> manage config logging file /var/log/aerospike/aerospike.log param aggr to info with 10.0.0.1 10.0.0.2 10.0.0.3
~Set Logging Param aggr to info~
Node|Response
10.0.0.1:3000|ok
10.0.0.2:3000|ok
10.0.0.3:3000|ok
Number of rows: 3
- Changing the network heartbeat configuration:
manage config network <subcontext> param <parameter> to <value>
Admin+> manage config network heartbeat param interval to 1500 with 10.0.0.1*
~Set Network Param interval to 1500~
Node|Response
10.0.0.1:3000|ok
Number of rows: 1
- Changing the security configuration:
manage config security [<subcontext>] param <parameter> to <value>
Admin+> manage config security param privilege-refresh-period to 4500 with 10.0.0.1*
~Set Security Param privilege-refresh-period to 4500~
Node|Response
10.0.0.1:3000|ok
10.0.0.1:3001|ok
10.0.0.1:3002|ok
10.0.0.1:3003|ok
10.0.0.1:3004|ok
Number of rows: 5
- Changing configuration for namespace test:
manage config namespace <ns> param <parameter> to <value>
Admin+> manage config namespace test param allow-ttl-without-nsup to false
~Set Namespace Param allow-ttl-without-nsup to false~
Node|Response
10.0.0.1:3000|ok
10.0.0.2:3000|ok
10.0.0.3:3000|ok
10.0.0.4:3000|ok
10.0.0.5:3000|ok
Number of rows: 5
- Changing configuration for namespace test and set testset:
manage config namespace <ns> set <set> param <parameter> to <value>
Admin+> manage config namespace test set testset param disable-eviction to true
~Set Namespace Param disable-eviction to true~
Node|Response
10.0.0.1:3000|ok
10.0.0.2:3000|ok
10.0.0.3:3000|ok
10.0.0.4:3000|ok
10.0.0.5:3000|ok
Number of rows: 5
- Changing configuration for namespace test and subcontext storage-engine:
manage config namespace <ns> <subcontext> param <parameter> to <value>
Admin+> manage config namespace test storage-engine param min-avail-pct to 0 with 10.0.0.1:3000
~Set Namespace Param min-avail-pct to 0~
Node|Response
10.0.0.1:3000|ok
Number of rows: 1
- Changing XDR configuration:
manage config xdr param <parameter> to <value>
Admin+> manage config xdr param src-id to 1 with 10.0.0.5*
~Set XDR Param src-id to 1~
Node|Response
10.0.0.5:3000|ok
Number of rows: 1
- Changing configuration for XDR datacenter DC1:
manage config xdr dc <datacenter> param <parameter> to <value>
Admin+> manage config xdr dc DC1 param period-ms to 5 with 10.0.0.2 10.0.0.3
~Set XDR DC param period-ms to 5~
Node|Response
10.0.0.2:3000|ok
10.0.0.3:3000|ok
Number of rows: 2
- Changing the namespace test configuration for XDR datacenter DC1:
manage config xdr dc <datacenter> namespace <ns> param <parameter> to <value>
Admin+> manage config xdr dc DC1 namespace test param ignore-bin to age
~Set XDR Namespace Param ignore-bin to age~
Node|Response
10.0.0.1:3000|ok
10.0.0.2:3000|ok
10.0.0.3:3000|ok
10.0.0.4:3000|ok
10.0.0.5:3000|ok
Number of rows: 5
Creating an XDR datacenter
(Introduced: 2.3.0)
The manage config xdr create dc <dc>
command is used to dynamically create a new XDR
datacenter.
Admin+> manage config xdr create dc DC3 with 10.0.0.4:3000
~~~Create XDR DC DC3~~
Node|Response
10.0.0.4:3000|ok
Number of rows: 1
Removing an XDR datacenter
(Introduced: 2.3.0)
The manage config xdr delete dc <dc>
command is used to dynamically delete an XDR
datacenter.
Admin+> manage config xdr delete dc DC3 with 10.0.0.4:3000
~~~Delete XDR DC DC3~~
Node|Response
10.0.0.4:3000|ok
Number of rows: 1
Add a node to an XDR datacenter
(Introduced: 2.3.0)
The manage config xdr dc <dc> add node <node:port>
command is used to dynamically add a
node to an XDR datacenter.
Admin+> manage config xdr dc DC3 add node 1.1.1.1:3000 with 10.0.0.4:3000
~Add XDR Node 1.1.1.1:3000 to DC DC3~
Node|Response
10.0.0.4:3000|ok
Number of rows: 1
Remove a node from an XDR datacenter
(Introduced: 2.3.0)
The manage config xdr dc <dc> remove node <node:port>
command is used to dynamically
remove a node from an XDR datacenter.
Admin+> manage config xdr dc DC3 remove node 1.1.1.1:3000 with 10.0.0.4:3000
~Remove XDR Node 1.1.1.1:3000 from DC DC3~
Node|Response
10.0.0.4:3000|ok
Number of rows: 1
Add a namespace to an XDR datacenter
(Introduced: 2.3.0)
The manage config xdr dc <dc> add namespace <ns>
command is used to dynamically add a
namespace to an XDR datacenter.
Admin+> manage config xdr dc DC3 add namespace test with 10.0.0.4:3000
~Add XDR namespace test to DC DC3~
Node|Response
10.0.0.4:3000|ok
Number of rows: 1
Remove a namespace from an XDR datacenter
(Introduced: 2.3.0)
The manage config xdr dc <dc> remove namespace <ns>
command is used to dynamically remove a
namespace from an XDR datacenter.
Admin+> manage config xdr dc DC3 remove namespace test with 10.0.0.4:3000
~Remove XDR Namespace test from DC DC3~
Node|Response
10.0.0.4:3000|ok
Number of rows: 1
Jobs
(Introduced: 2.5.0)
The manage jobs
command aborts jobs running on the Aerospike cluster.
The show jobs
command should be used in conjunction with manage jobs
commands.
Killing Jobs Using Transaction IDs
(Introduced: 2.5.0)
Access Control Permissions: data-admin
The manage jobs kill trids <trid1> [<trid2> [...]]
command kills jobs matching
the provided trids. The command will find the appropriate node and module and send the request.
In this example we kill two jobs. The first is a scan on node 10.0.0.1 and the second is a query on node 10.0.0.2.
Admin+> manage jobs kill trids 1343444200604843206 9156474088110606100
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Kill Jobs (2021-10-20 23:57:22 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node| Transaction ID|Namespace|Module| Type| Response
10.0.0.1:3000|9156474088110606100| bar|scan |basic|ok
10.0.0.2:3000|1343444200604843206| bar|query |basic|Failed to kill job : job not active.
Number of rows: 1
Killing All Jobs
(Introduced: 2.5.0)
The manage jobs kill all
command kills all jobs from the specified
module.
Killing All Query Jobs
(Introduced: 2.7.0)
Access Control Permissions: data-admin
The manage jobs kill all queries
command kills all query jobs.
Note: Scans and queries were unified in server 6.0 and later.
Admin+> manage jobs kill all queries
~~~~~~~~~~~~~~~~~~Kill Jobs~~~~~~~~~~~~~~~~~~~
Node| Response
10.0.0.1:3000|ok - number of queries killed: 4
10.0.0.2:3000|ok - number of queries killed: 4
10.0.0.3:3000|ok - number of queries killed: 3
Number of rows: 3
Killing All Scan Jobs
(Introduced: 2.5.0)
Access Control Permissions: data-admin
The manage jobs kill all scans
command kills all scan jobs.
Note: Scans and queries were unified in server 6.0 and later.
Admin+> manage jobs kill all scans
~~~~~~~~~~~~~~~~~Kill Jobs~~~~~~~~~~~~~~~~~~
Node| Response
10.0.0.1:3000|ok - number of scans killed: 4
10.0.0.2:3000|ok - number of scans killed: 4
Number of rows: 2
Truncation
(Introduced: 2.3.0)
The manage truncate
command truncates, or undoes truncation of, a namespace or set in the Aerospike cluster. The command only sends requests to the principal node.
Truncating a Namespace or Set
(Introduced: 2.3.0)
Access Control Permissions: data-admin, write
The manage truncate ns <ns> [set <set>] [before <iso-8601-or-unix-epoch> iso-8601|unix-epoch]
command is used to delete records in the given namespace or set. The deletes are durable, preserving record deletions, in the Enterprise Edition only. See
truncate-namespace and
truncate for more information.
If the before modifier is provided, the command deletes every record in the given namespace/set whose last-update-time ("lut") is older than the given time. If the before modifier is not provided, the current time is used. The before modifier accepts an iso-8601 formatted or unix-epoch datetime followed by the literal iso-8601 or unix-epoch, respectively. A unix-epoch can be in seconds (1622054620), milliseconds (1622054620.mmm), microseconds (1622054620.mmmuuu), or nanoseconds (1622054620.mmmuuunnn). The --warn flag is on by default because of the importance of this command. If you would like to disable the warning, use the --no-warn flag.
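To illustrate the accepted datetime forms, here is a small Python sketch (standard library only; the cutoff time is a hypothetical example) showing how one moment in time maps to the values the before modifier accepts:

```python
from datetime import datetime, timezone

# Hypothetical truncation cutoff: 2021-05-26 18:43:40 UTC.
cutoff = datetime(2021, 5, 26, 18, 43, 40, tzinfo=timezone.utc)

iso_form = cutoff.isoformat()                 # value for the iso-8601 literal
epoch_seconds = str(int(cutoff.timestamp()))  # unix-epoch in seconds
epoch_millis = f"{cutoff.timestamp():.3f}"    # unix-epoch with milliseconds

print(iso_form)       # 2021-05-26T18:43:40+00:00
print(epoch_seconds)  # 1622054620
print(epoch_millis)   # 1622054620.000
```

Either form could then be passed to the command, e.g. manage truncate ns test before 1622054620 unix-epoch.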
In this example we truncate records in the namespace test with a lut before May 26th, 2021 at 1:24:40 PM UTC-07:00 (8:24:40 PM UTC).
Admin> info namespace object
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Namespace Object Information (2021-05-26 20:25:52 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Namespace| Node|Rack| Repl| Total|~~~~~~~~~~Objects~~~~~~~~~~~|~~~~~~~~~Tombstones~~~~~~~~|~~~~Pending~~~~
| | ID|Factor| Records| Master| Prole|Non-Replica| Master| Prole|Non-Replica|~~~~Migrates~~~
| | | | | | | | | | | Tx| Rx
bar |ubuntu:3000| 0| 1| 0.000 | 0.000 |0.000 | 0.000 |0.000 |0.000 | 0.000 |0.000 |0.000
bar | | | | 0.000 | 0.000 |0.000 | 0.000 |0.000 |0.000 | 0.000 |0.000 |0.000
test |ubuntu:3000| 0| 1|98.297 K|98.297 K|0.000 | 0.000 |0.000 |0.000 | 0.000 |0.000 |0.000
test | | | |98.297 K|98.297 K|0.000 | 0.000 |0.000 |0.000 | 0.000 |0.000 |0.000
Number of rows: 2
Admin> enable --warn
Admin+> manage truncate ns test before 2021-05-26T13:24:40-07:00 iso-8601
You are about to truncate up to 98297 records from namespace test with LUT before 13:24:40.000000 UTC-07:00 on May 26, 2021
Confirm that you want to proceed by typing x927c0, or cancel by typing anything else.
x927c0
Successfully started truncation for namespace test
Admin+> info namespace object
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Namespace Object Information (2021-05-26 20:26:35 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Namespace| Node|Rack| Repl| Total|~~~~~~~~~~Objects~~~~~~~~~~|~~~~~~~~~Tombstones~~~~~~~~|~~~~Pending~~~~
| | ID|Factor|Records| Master| Prole|Non-Replica| Master| Prole|Non-Replica|~~~~Migrates~~~
| | | | | | | | | | | Tx| Rx
bar |ubuntu:3000| 0| 1|0.000 |0.000 |0.000 | 0.000 |0.000 |0.000 | 0.000 |0.000 |0.000
bar | | | |0.000 |0.000 |0.000 | 0.000 |0.000 |0.000 | 0.000 |0.000 |0.000
test |ubuntu:3000| 0| 1|0.000 |0.000 |0.000 | 0.000 |0.000 |0.000 | 0.000 |0.000 |0.000
test | | | |0.000 |0.000 |0.000 | 0.000 |0.000 |0.000 | 0.000 |0.000 |0.000
Number of rows: 2
Undo Truncation for a Namespace or Set
(Introduced: 2.3.0)
Access Control Permissions: data-admin, write
The manage truncate undo ns <ns> [set <set>]
command is used to undo a previous truncate event. It operates by removing the associated System Metadata (SMD) entry, which allows some previously truncated records to be resurrected on the next cold restart. This only works for records that have not had their persisted storage blocks overwritten. See truncate-namespace-undo and
truncate-undo for more information.
Admin+> manage truncate undo ns test
Successfully triggered undoing truncation for namespace test on next cold restart
Quiesce
(Introduced: 2.3.0)
The manage quiesce
command is used to quiesce and revert the effects of a quiesce for a node in the Aerospike cluster.
Quiescing a Node
(Introduced: 2.3.0)
Access Control Permissions: sys-admin
The manage quiesce with node1 [node2 [...]]
command is used to stop a node from participating
as a replica after the next recluster event. See quiesce for more information.
Admin+> manage quiesce with 192.168.173.203
~~~~~~~~Quiesce Nodes~~~~~~~~
Node|Response
192.168.173.203:3000|ok
Number of rows: 1
Run "manage recluster" for your changes to take effect.
Reverse Effects of Quiesce for a Node
(Introduced: 2.3.0)
Access Control Permissions: sys-admin
The manage quiesce undo with node1 [node2 [...]]
command is used to revert the effect of a quiesce on the next recluster event.
See quiesce-undo for more information.
Admin+> manage quiesce undo with 192.168.173.203
~~~~Undo Quiesce for Nodes~~~
Node|Response
192.168.173.203:3000|ok
Number of rows: 1
Run "manage recluster" for your changes to take effect.
Recluster
(Introduced: 2.3.0)
Access Control Permissions: sys-admin
The manage recluster
command is used to force the cluster to advance and rebalance.
See recluster for more information.
Admin+> manage recluster
Successfully started recluster
Revive
(Introduced: 2.5.0)
Access Control Permissions: sys-admin
The manage revive
command is used to revive dead partitions in a namespace running in strong
consistency mode.
Admin+> manage revive ns test
~Revive Namespace Partitions~
Node|Response
localhost:3000|ok
Number of rows: 1
Run "manage recluster" for your changes to take effect.
Roster
(Introduced: 2.5.0)
The manage roster
commands are used to modify the pending roster. To commit the
pending roster to the current roster a recluster event must occur. To manually
trigger a recluster event use the manage recluster
command. Commands that modify the
roster are only sent to the principal node. The show roster
command should be used in conjunction with manage roster
commands.
Setting the Pending Roster to the Observed Nodes
(Introduced: 2.5.0)
Access Control Permissions: sys-admin
The manage roster stage observed ns <ns>
command assigns the observed nodes and their configured rack-ids to the pending roster. This helps you quickly initialize a strong-consistency cluster.
Admin+> manage roster stage observed ns test
You are about to set the pending-roster for namespace test to: BB9040016AE4202@1, BB9020016AE4202@2, BB9010016AE4202@3
Confirm that you want to proceed by typing x5e360, or cancel by typing anything else.
x5e360
Pending roster now contains observed nodes.
Run "manage recluster" for your changes to take effect.
Setting the Pending Roster to a List of Nodes
(Introduced: 2.5.0)
Access Control Permissions: sys-admin
The manage roster stage nodes <node1[@rack1]> [<node2[@rack2]> [...]] ns <ns>
command allows you
to overwrite the pending roster with any list of nodes. The --warn
flag is on by default because of the importance of this command.
If you would like to disable the warning use the --no-warn
flag.
Admin+> manage roster stage nodes BB9040016AE4202@1, BB9020016AE4202@2, BB9010016AE4202@3 ns bar
WARNING: The following node(s) are not found in the observed list or have a
different configured rack-id: BB9020016AE4202@2, BB9040016AE4202@1, BB9010016AE4202@3
You are about to set the pending-roster for namespace bar to: BB9040016AE4202@1, BB9020016AE4202@2, BB9010016AE4202@3
Confirm that you want to proceed by typing 5de1f4, or cancel by typing anything else.
5de1f4
Pending roster successfully set.
Run "manage recluster" for your changes to take effect.
Adding Nodes to the Pending Roster
(Introduced: 2.5.0)
Access Control Permissions: sys-admin
The manage roster add nodes <node1[@rack1]> [<node2[@rack2]> [...]] ns <ns>
command allows you to
add nodes to the pending roster. The --warn
flag is on by default because of the importance of this command.
If you would like to disable the warning use the --no-warn
flag.
Admin+> manage roster add nodes BB9040016AE4202@1, BB9020016AE4202@2, BB9010016AE4202@3 ns bar --no-warn
Node(s) successfully added to pending-roster.
Run "manage recluster" for your changes to take effect.
Removing Nodes from the Pending Roster
(Introduced: 2.5.0)
Access Control Permissions: sys-admin
The manage roster remove nodes <node1[@rack1]> [<node2[@rack2]> [...]] ns <ns>
command allows you to
remove nodes from the pending roster. The --warn
flag is on by default because of the importance of this command.
If you would like to disable the warning use the --no-warn
flag.
Admin+> manage roster remove nodes BB9040016AE4202@1, BB9020016AE4202@2, BB9010016AE4202@3 ns bar --no-warn
Node(s) successfully removed from pending-roster.
Run "manage recluster" for your changes to take effect.
Secondary Indexes
(Introduced: 2.1.0)
The manage sindex
commands are used to create and delete secondary indexes (sindex) from
an Aerospike cluster. The show sindex
command should be used in conjunction with
manage sindex
commands.
Creating Secondary Indexes
(Introduced: 2.1.0)
Access Control Permissions: user-admin
The manage sindex create <bin-type> <index-name> ns <ns> [set <set>] bin <bin-name> [in <index-type>] [ctx <context>]
command is used for creating secondary indexes (sindexes). The <bin-type> is the bin type of the provided <bin-name> and should be one of the following values: numeric, string, or geo2dsphere. The <ns> argument defines the namespace to create the sindex on. Optionally, <set> defines the set to create the secondary index on; see the note below about <set>. The <bin-name> defines the bin to create the secondary index on. The <index-type> defines how a bin's value should be used to create a secondary index. Possible values are: list to use the elements of a list as keys, mapkeys to use the keys of a map as keys, and mapvalues to use the values of a map as keys. The default uses the contents of the bin itself as keys.
In server 6.1 and tools 7.2 and newer, sindexes may be created on CDTs. CDTs are referenced using a context.
The <context> is a space-separated list. Possible elements of the list are as follows:
list_index(<index>)
list_rank(<rank>)
list_value(<value>)
map_index(<index>)
map_rank(<rank>)
map_key(<key>)
map_value(<value>)
Where <index> and <rank> are integers, <key> is an integer, string, or base64 encoded byte string, and <value> accepts the same types as <key> with the addition of booleans and floats.
By default, a value provided for <key> or <value> is interpreted as a string unless one of the following specifiers is used: int(<int>), bytes(<base64>), bool(<true|false>), or float(<float>).
i.e. int(1), bytes(YWVyb3NwaWtlCg==), bool(true), or float(3.14159)
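For byte-string keys, the base64 text inside bytes(<base64>) is the standard base64 encoding of the raw bytes. A small Python sketch (the raw key here is hypothetical; it happens to encode to the example string above) shows how such a ctx element could be built:

```python
import base64

# Hypothetical raw blob key stored in the record.
raw_key = b"aerospike\n"

# Standard base64 encoding, as expected by the bytes() specifier.
encoded = base64.b64encode(raw_key).decode("ascii")
ctx_element = f"map_key(bytes({encoded}))"

print(encoded)      # YWVyb3NwaWtlCg==
print(ctx_element)  # map_key(bytes(YWVyb3NwaWtlCg==))
```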
In server 5.7 and earlier, not providing a <set> creates a sindex on all records in a namespace without a set (in the null set). In server 6.1 and later, not providing a <set> creates a sindex on all records in a namespace regardless of their set.
To create a sindex on records in namespace StarWars and set BountyHunters with a numeric age-bin:
Example Record:
{
name-bin: "Bobafet",
age-bin: 57
}
You could run
Admin+> manage sindex create numeric age-index ns StarWars set BountyHunters bin age-bin
Use 'show sindex' to confirm 'age-index' was created successfully
Starting with server 6.1 you can now create sindexes on bins containing CDTs. For example, if a bin has a List CDT containing people sorted from youngest to oldest:
Example Record:
{
people-bin: [
{
first-name: "Timmy",
age: 12
},
{
first-name: "Sally",
age: 15
},
{
first-name: "Jesse",
age: 27
}
]
}
To create a sindex on the eldest person's first-name in people-bin, you could run the following:
Admin+> manage sindex create string eldest-name ns test bin people-bin ctx list_index(-1) map_key(first-name)
Use 'show sindex' to confirm 'eldest-name' was created successfully
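As a plain-Python analogue (illustration only, not the server's implementation), the ctx path list_index(-1) map_key(first-name) walks the CDT the same way negative list indexing and a map lookup would:

```python
# The example record from above, as a Python structure.
record = {
    "people-bin": [
        {"first-name": "Timmy", "age": 12},
        {"first-name": "Sally", "age": 15},
        {"first-name": "Jesse", "age": 27},
    ]
}

# list_index(-1) selects the last list element; map_key(first-name)
# selects that map's value -- the value the sindex is built on.
indexed_value = record["people-bin"][-1]["first-name"]
print(indexed_value)  # Jesse
```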
Deleting Secondary Indexes
(Introduced: 2.1.0)
Access Control Permissions: data-admin or sys-admin
The manage sindex delete <index-name> ns <ns> [set <set>]
command is used for
deleting secondary indexes (sindex). The ns
argument is the namespace the sindex was
created on. If the sindex was also created on a set then the set
argument is
required.
Admin+> manage sindex delete age-index ns test
Successfully deleted sindex age-index
User Defined Functions
(Introduced: 2.1.0)
The manage udfs
commands are used to add and remove UDF modules from an Aerospike
cluster. The show udfs
command should be used in conjunction with
manage udfs
commands.
Adding a UDF
(Introduced: 2.1.0)
Access Control Permissions: data-admin or sys-admin
The manage udfs add <module-name> path <module-path>
command allows a user to register
a UDF module. The <module-name> must include a file extension. The path argument
can be a relative or absolute path, checked in that order. This command
can also be used to update an existing module.
Admin+> manage udfs add test.lua path path/to/test.lua
Successfully added UDF test.lua
Removing a UDF
(Introduced: 2.1.0)
Access Control Permissions: data-admin or sys-admin
The manage udfs remove <module-name>
command allows a user
to unregister an existing UDF module.
Admin+> manage udfs remove test.lua
Successfully removed UDF test.lua
Features
(Introduced: 0.0.15)
The features
command displays the features in use across the cluster. It supports the like and with modifiers.
Example Output:
Admin> features
~~~~~~~~~~~Features (2020-12-18 02:09:28 UTC)~~~~~~~~~~~~
Node |10.0.0.1:3000|10.0.0.2:3000|10.0.0.3:3000
AGGREGATION |NO |NO |NO
BATCH |NO |NO |NO
INDEX-ON-DEVICE|NO |NO |NO
INDEX-ON-PMEM |NO |NO |NO
KVS |YES |YES |YES
LDT |NO |NO |NO
QUERY |NO |NO |NO
RACK-AWARE |NO |NO |NO
SC |NO |NO |NO
SCAN |NO |NO |NO
SECURITY |NO |NO |NO
SINDEX |NO |NO |NO
TLS (FABRIC) |NO |NO |NO
TLS (HEARTBEAT)|NO |NO |NO
TLS (SERVICE) |NO |NO |NO
UDF |NO |NO |NO
XDR DESTINATION|NO |NO |NO
XDR SOURCE |NO |NO |NO
Number of rows: 19
Summary
(Introduced: 0.1.9)
The summary
command displays a summary of the cluster. This command accepts remote server credentials to collect system statistics and include them in the summary. By default it collects Aerospike data from all nodes but system statistics only from localhost (if it is a node of the connected cluster).
To enable remote system statistics collection, use the --enable-ssh
option. The command accepts additional ssh credentials through the following options:
--ssh-user, --ssh-pwd, --ssh-port, and --ssh-key. You can also provide all credentials through a file
by using the --ssh-cf option. Refer to help summary
for further details. For a better "Usage Unique(Data)" summary, provide the agent host and agent port of the UDA with the --agent-host
and --agent-port
options respectively. By default, UDA entries where the cluster is reportedly unstable (migrations, etc.) are filtered out. To include these entries use the --agent-unstable
flag.
Tools package 7.1.1 or later is required to use asadm's integration with the UDA
Example Output:
Admin> summary -l
Cluster
=======
1. Server Version : E-5.7.0.5
2. OS Version :
3. Cluster Size : 3
4. Devices : Total 1, per-node 1
5. Memory : Total 3.750 GB, 0.06% used (2.183 MB), 99.94% available (3.748 GB)
6. Pmem Index : Total 3.000 GB, 0.00% used (0.000 B), 100.00% available (3.000 GB)
7. Disk : Total 0.000 B, 0.00% used (0.000 B), 0.00% available contiguous space (0.000 B)
8. Usage (Unique Data): Latest: 625.000 KB Max: 805.000 KB Min: 0.000 KB Avg: 632.000 KB
9. Active Namespaces : 1 of 1
10. Features : KVS, Query, Rack-aware, SC, SINDEX, Scan
Namespaces
==========
test
====
1. Devices : Total 1, per-node 1
2. Memory : Total 3.750 GB, 0.06% used (2.183 MB), 99.94% available (3.748 GB)
3. Pmem Index : Total 3.000 GB, 0.00% used (0.000 B), 100.00% available (3.000 GB)
4. Disk : Total 0.000 B, 0.00% used (0.000 B), 0.00% available contiguous space (0.000 B)
5. Replication Factor : 2
6. Rack-aware : False
7. Master Objects : 1.307 K
8. Compression-ratio : 1.0
Collectinfo
The collectinfo
command collects snapshots of cluster information (statistics and configurations) and the aerospike conf file for the local node it is run from.
It also collects system statistics of all nodes if remote server credentials are provided; otherwise it collects system stats for the local node only.
To collect more than one snapshot, use -n
to specify the number of snapshots and -s
to specify the sleep time between snapshots.
By default collectinfo
collects Aerospike data from all nodes but system statistics only from localhost (if it is a node of the connected cluster).
To enable remote system statistics collection, use the --enable-ssh
option. The command accepts additional ssh credentials
through options like --ssh-user, --ssh-pwd, --ssh-port, and --ssh-key. You can also provide all credentials
through a file by using the --ssh-cf
option. See help collectinfo
for more details.
Tools package 7.1.1 or later is required to use asadm's integration with the UDA
Optionally, if the cluster has a UDA running on the network, you can collect license data usage for a more accurate picture
of data usage. To enable this feature, use the --agent-host
and --agent-port
options. Furthermore, to collect the UDA's store file, use the
--agent-store
flag.
Pager
(Introduced: 0.0.17)
The pager
command sets the pager for output. For output that cannot fit in the
console, this command gives the option to scroll each output table vertically as well as
horizontally.
Other Commands
Asinfo
The asinfo
command provides raw access to the Aerospike info protocol. With it
you can change live configurations and view a wide array of technical data for
the cluster. To access asinfo, the user must enter privileged
mode by typing enable
. Please see enable. For a list of command strings see the
asinfo documentation. The asinfo command allows
the user to copy and paste commands from the command-line asinfo tool and execute
them across the entire cluster.
Unlike the command-line tool, to select specific nodes you need to use the with modifier, and you can filter the results with the like modifier.
The below asinfo
command retrieves the configurations from all nodes
and filters for configurations containing the word "batch".
Admin+> asinfo -v get-config like batch
172.16.245.231 (172.16.245.231) returned:
batch-max-requests=5000;query-batch-size=100
172.16.245.232 (172.16.245.232) returned:
batch-max-requests=5000;query-batch-size=100
172.16.245.233 (172.16.245.233) returned:
batch-max-requests=5000;query-batch-size=100
172.16.245.234 (172.16.245.234) returned:
batch-max-requests=5000;query-batch-size=100
Watch
The watch
command should come before another asadm command and has two
optional fixed-position arguments. The first position is the number of seconds
to wait between iterations and the second position is the number of iterations
to execute.
The example below runs info network
3 times with a 5 second sleep
between iterations. Though not visible here, it also highlights changes between iterations.
Admin> watch 5 3 info network
[ 2020-12-17 18:11:41 'info network' sleep: 5.0s iteration: 1 of 3 ]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Network Information (2020-12-18 02:11:41 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node| Node ID| IP| Build|Migrations|~~~~~~~~~~~~~~~~~~Cluster~~~~~~~~~~~~~~~~~~|Client| Uptime
| | | | |Size| Key|Integrity| Principal| Conns|
10.0.0.1:3000| BB9010016AE4202| 10.0.0.1:3000|C-5.3.0.1| 0.000 | 5|33718FC58CD6|True |BB9060016AE4202| 4|02:20:24
10.0.0.2:3000| BB9020016AE4202| 10.0.0.2:3000|C-5.3.0.1| 0.000 | 5|33718FC58CD6|True |BB9060016AE4202| 4|02:20:23
Number of rows: 2
[ 2020-12-17 18:11:46 'info network' sleep: 5.0s iteration: 2 of 3 ]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Network Information (2020-12-18 02:11:46 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node| Node ID| IP| Build|Migrations|~~~~~~~~~~~~~~~~~~Cluster~~~~~~~~~~~~~~~~~~|Client| Uptime
| | | | |Size| Key|Integrity| Principal| Conns|
10.0.0.1:3000| BB9010016AE4202| 10.0.0.1:3000|C-5.3.0.1| 0.000 | 5|33718FC58CD6|True |BB9060016AE4202| 3|02:20:29
10.0.0.2:3000| BB9020016AE4202| 10.0.0.2:3000|C-5.3.0.1| 0.000 | 5|33718FC58CD6|True |BB9060016AE4202| 3|02:20:28
Number of rows: 2
[ 2020-12-17 18:11:51 'info network' sleep: 5.0s iteration: 3 of 3 ]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Network Information (2020-12-18 02:11:51 UTC)~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node| Node ID| IP| Build|Migrations|~~~~~~~~~~~~~~~~~~Cluster~~~~~~~~~~~~~~~~~~|Client| Uptime
| | | | |Size| Key|Integrity| Principal| Conns|
10.0.0.1:3000| BB9010016AE4202| 10.0.0.1:3000|C-5.3.0.1| 0.000 | 5|33718FC58CD6|True |BB9060016AE4202| 3|02:20:34
10.0.0.2:3000| BB9020016AE4202| 10.0.0.2:3000|C-5.3.0.1| 0.000 | 5|33718FC58CD6|True |BB9060016AE4202| 3|02:20:33
Number of rows: 2