Bulk Data Loading Procedures
You can load graph data into an Aerospike database efficiently with the Aerospike Graph bulk data loader and the Gremlin `call` API.
Bulk loading with the Gremlin `call` step
The bulk loader can only load data into an empty database.
Requirements
- A running Aerospike Graph Service (AGS) instance. See Getting Started for help with getting an AGS instance up and running.
- A running Aerospike Database instance, version 6.2.0.7 or higher.
- Data files for edges and vertices in the Gremlin CSV format.
Source data files
The bulk loader accepts data files in the Gremlin CSV format, with vertices and edges specified in separate files. All CSV files should have header information with names for each column of data.
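For illustration, here is a minimal sketch of a vertex file and an edge file, assuming the usual Gremlin CSV conventions of `~id`, `~label`, `~from`, and `~to` system columns; the file names, properties, and typed headers are illustrative, not prescribed.

A vertex file such as `people.csv`:

```csv
~id,~label,name:String,age:Int
1,person,Alice,34
2,person,Bob,29
```

An edge file such as `knows.csv`, with the optional `~id` column omitted:

```csv
~label,~from,~to,since:Int
knows,1,2,2015
```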
Aerospike Graph does not support user-provided `~id` values for edges, so the `~id` column is optional for edge CSV files. If your CSV file contains an `~id` column, the values can be preserved as an edge property if you set the `aerospike.graphloader.keep-provided-edge-id-as-property` configuration option to `true`. You can configure the property key for this value with the `aerospike.graphloader.provided-edge-id-property-name` configuration option.
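For example, a load that preserves provided edge IDs under a custom property key might look like the following sketch. The directory paths and the `originalId` property name are placeholders, not defaults:

```groovy
g.with("evaluationTimeout", 24 * 60 * 60 * 1000).
  call("bulk-load").
  with("aerospike.graphloader.vertices", "/opt/aerospike/etc/sampledata/vertices").
  with("aerospike.graphloader.edges", "/opt/aerospike/etc/sampledata/edges").
  // keep the CSV ~id values as an edge property named "originalId"
  with("aerospike.graphloader.keep-provided-edge-id-as-property", true).
  with("aerospike.graphloader.provided-edge-id-property-name", "originalId")
```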
CSV data files can be either local or in cloud-based storage. Cloud-based data files can be stored in either Amazon AWS S3 or Google Cloud Storage. Data files should be stored in directories specified by the `aerospike.graphloader.vertices` and `aerospike.graphloader.edges` configuration options. The specified directories must contain at least one subdirectory containing one or more CSV files.
For example, if you have an S3 bucket named `myBucket`, that bucket should contain separate directories for edge and vertex data files, and those directories should contain subdirectories for the CSV files. If the `aerospike.graphloader.vertices` configuration option is set to `s3://myBucket/vertices`, you might have subdirectories named `s3://myBucket/vertices/people` and `s3://myBucket/vertices/places`, each containing one or more CSV files.
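With that configuration, the bucket layout might look like this (the file names are illustrative):

```
s3://myBucket/
├── vertices/
│   ├── people/
│   │   └── people.csv
│   └── places/
│       └── places.csv
└── edges/
    └── knows/
        └── knows.csv
```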
When using cloud-based source data files, be sure to include your cloud service credentials with the `call` function. The required parameters for cloud service credentials are listed in the Cloud storage configuration options section.
Bulk loading with local files
You can bulk load local files with the Gremlin `call` step by specifying their location in the Gremlin command. Use the `aerospike.graphloader.vertices` and `aerospike.graphloader.edges` options to specify file directory locations.
Local files must be accessible to the AGS Docker image. Specify local file locations in your Docker `run` command.
The `call` API runs the bulk loader on a single AGS instance. Aerospike Graph runs in Docker, so any local file paths that you pass to `call` must be accessible to the Docker image. If you are using local source data files, you must mount them in the Docker image to make them accessible to the bulk loader. More information about mounting directories is available in the Docker volumes documentation.
In this example, we have the following directories:

```
/home/graph-user/graph/data/docker-bulk-load/sampledata/vertices/
/home/graph-user/graph/data/docker-bulk-load/sampledata/edges/
```

When we mount `/home/graph-user/graph/data/docker-bulk-load/` to `/opt/aerospike/etc/`, the container sees all the subdirectories below `/opt/aerospike/etc/`, including `sampledata/*`. That is reflected in the paths specified in the `call` step below.
```bash
docker run -p 8182:8182 \
  -v /home/graph-user/graph/data/docker-bulk-load/:/opt/aerospike/etc/ \
  aerospike/aerospike-graph-service
```
If you are using cloud storage for your data source files, you do not need to specify their location in the Docker `run` command.
When using the `-v` flag, the path on the left side of the `:` character is your local path, and the path on the right side is the path within your AGS Docker image.
To specify your file locations in the Gremlin command, use the `with` step:
g.with("evaluationTimeout", 24 * 60 * 60 * 1000).call("bulk-load").with("aerospike.graphloader.vertices", "/opt/aerospike/etc/sampledata/vertices").with("aerospike.graphloader.edges", "/opt/aerospike/etc/sampledata/edges")
Bulk loading with remote files
The bulk loader supports remote data files stored in Google Cloud Storage buckets on GCP and S3 buckets on AWS. You can specify remote file locations, and the credentials necessary to reach them, in the `call` step. Use the `aerospike.graphloader.vertices` and `aerospike.graphloader.edges` options to specify file directory locations, and use any necessary credential options to authenticate with your cloud provider.
The following example Gremlin command uses the bulk loader to add data from source data files stored in an AWS S3 bucket to an Aerospike Graph database:
g.with("evaluationTimeout", 24 * 60 * 60 * 1000).call("bulk-load").with("aerospike.graphloader.vertices", "s3://myBucket/vertices").with("aerospike.graphloader.edges", "s3://myBucket/edges").with("aerospike.graphloader.remote.user", "AWS_ACCESS_KEY_ID").with("aerospike.graphloader.remote.passkey", "AWS_SECRET_ACCESS_KEY")
The `evaluationTimeout` parameter
All bulk loader operations should include the `evaluationTimeout` parameter:
Java:

```java
.with("evaluationTimeout", 24L * 60L * 60L * 1000L)
```

Python:

```python
.with('evaluationTimeout', 24 * 60 * 60 * 1000)
```

Groovy (Gremlin Console):

```groovy
.with("evaluationTimeout", 24L * 60L * 60L * 1000L)
```
This parameter prevents the bulk loading operation from timing out when running for extended periods. In the above example, the timeout is set to 24 hours in milliseconds. You can adjust it as necessary.
Certain Gremlin language variants may expect the numeric value to be of type `Long`, as shown in the Java example.
Configuration options
You can specify configuration options as part of the Gremlin `call` step.
The following options are available for the bulk loader:
| Name | Optional | Default | Description |
|---|---|---|---|
| `aerospike.graphloader.edges` | no | none | The path to the directory or cloud storage location where CSV files containing edge data are stored. This directory may contain subdirectories with CSV files. |
| `aerospike.graphloader.vertices` | no | none | The path to the directory or cloud storage location where CSV files containing vertex data are stored. This directory may contain subdirectories with CSV files. |
| `aerospike.graphloader.keep-provided-edge-id-as-property` | yes | false | If set to `true` and the edge CSV file contains an `~id` field, that field is stored as an edge property with the key specified by `aerospike.graphloader.provided-edge-id-property-name`. If set to `false` and the edge CSV file contains an `~id` field, information in that column is ignored. |
| `aerospike.graphloader.provided-edge-id-property-name` | yes | `~providedId` | Property name to use for data stored in the `~id` field of the CSV file, if any. |
| `aerospike.graphloader.sampling-percentage` | yes | 0 | The percentage of the input data to sample for verifying that the bulk loading job was successful. |
| `aerospike.graphloader.null-value` | yes | `null` | A string which Graph should parse as a literal null value for properties. The null character `\0` is a good alternative choice. |
| `aerospike.graphloader.vertex-write-buffer` | yes | 10000 | Write buffer size for vertex loading. |
| `aerospike.graphloader.edge-write-buffer` | yes | 10000 | Write buffer size for edge loading. |
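As a sketch, a load that also exercises some of the optional settings might look like the following; the values shown are illustrative, not recommendations:

```groovy
g.with("evaluationTimeout", 24 * 60 * 60 * 1000).
  call("bulk-load").
  with("aerospike.graphloader.vertices", "/opt/aerospike/etc/sampledata/vertices").
  with("aerospike.graphloader.edges", "/opt/aerospike/etc/sampledata/edges").
  // sample 10% of the input to verify the load
  with("aerospike.graphloader.sampling-percentage", 10).
  // larger write buffers for large input files
  with("aerospike.graphloader.vertex-write-buffer", 20000).
  with("aerospike.graphloader.edge-write-buffer", 20000)
```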
Cloud storage configuration options
The bulk loader supports cloud-based source data storage locations. If your edge and vertex CSV files are stored in AWS S3 or Google Cloud Storage buckets, the following configuration options are relevant.
These options may be optional or required, depending on the remote environment. Check your cloud service documentation for details.
| Name |
|---|
| `aerospike.graphloader.remote.user` |
| `aerospike.graphloader.remote.passkey` |
| `aerospike.graphloader.gcs-email` |
| `aerospike.graphloader.gcs-keyfile` |
Additional cloud considerations

AWS:
- Populate the `aerospike.graphloader.remote.user` option with your AWS `AWS_ACCESS_KEY_ID` value.
- Populate the `aerospike.graphloader.remote.passkey` option with your AWS `AWS_SECRET_ACCESS_KEY` value.
- The AWS options are required for the Gremlin `call` step unless the Graph Docker environment is preconfigured with AWS credentials.
- The GCS options are not applicable.
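If you prefer to preconfigure credentials rather than pass them in the `call` step, one common approach is to supply the standard AWS SDK environment variables in the Docker `run` command. This is a sketch that assumes the container's AWS client honors these variables; the values are placeholders:

```bash
docker run -p 8182:8182 \
  -e AWS_ACCESS_KEY_ID=<your-access-key-id> \
  -e AWS_SECRET_ACCESS_KEY=<your-secret-access-key> \
  aerospike/aerospike-graph-service
```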
When using Google Cloud Storage for source data files, you must configure a GCS Service Account. When running the bulk loader, specify the following options:
- `aerospike.graphloader.remote.user`: your GCS `private_key_id` value
- `aerospike.graphloader.remote.passkey`: your GCS `private_key` value
- `aerospike.graphloader.gcs-email`: your GCS `client_email` value
These values can be found in the key file JSON generated for the GCS Service Account.
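Putting these together, a GCS-based load might look like the following sketch, assuming a `gs://` bucket URI and placeholder credential values:

```groovy
g.with("evaluationTimeout", 24 * 60 * 60 * 1000).
  call("bulk-load").
  with("aerospike.graphloader.vertices", "gs://myBucket/vertices").
  with("aerospike.graphloader.edges", "gs://myBucket/edges").
  with("aerospike.graphloader.remote.user", "<private_key_id>").
  with("aerospike.graphloader.remote.passkey", "<private_key>").
  with("aerospike.graphloader.gcs-email", "<client_email>")
```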
Alternatively, if the AGS Docker environment has access to the key file itself, you can instead specify the path to the file with the `aerospike.graphloader.gcs-keyfile` option, but it is usually easier to specify the three options as part of the `call` function.
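For the key file approach, a sketch might look like this, assuming the key file has been mounted into the container (the path shown is illustrative):

```groovy
g.with("evaluationTimeout", 24 * 60 * 60 * 1000).
  call("bulk-load").
  with("aerospike.graphloader.vertices", "gs://myBucket/vertices").
  with("aerospike.graphloader.edges", "gs://myBucket/edges").
  with("aerospike.graphloader.gcs-keyfile", "/opt/aerospike/etc/gcs-key.json")
```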