
Configuring the Kafka Sink (Inbound) Connector

The Aerospike Kafka sink (inbound) connector reads records from Apache Kafka and writes them to Aerospike.

Configuring streaming from Kafka to Aerospike involves configuring the Kafka sink connector to transform Kafka records into Aerospike records. This is driven by a YAML configuration file located at <connector-directory>/etc/aerospike-kafka-inbound.yml on each Kafka Connect node.

The configuration has the following options:
max-queued-records (optional; default: 32768)
  The maximum number of records queued up within the connector. The queue size may exceed this limit before topics are paused. All topics resume once the queue size drops below half of the maximum.

processing-threads (optional; default: number of available processors)
  Number of threads used to process Kafka records and convert them to Aerospike records.

aerospike (required)
  Connection properties the connector uses to connect to your Aerospike database.

topics (required)
  The Kafka topics the connector listens to, and how their records are transformed into Aerospike records.
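The pause/resume behavior controlled by max-queued-records can be sketched as follows. This is an illustrative Python model of the policy described above (pause when the queue exceeds the maximum, resume all topics once it drops below half the maximum), not the connector's actual implementation:

```python
class TopicBackpressure:
    """Illustrative sketch of the max-queued-records pause/resume policy."""

    def __init__(self, max_queued_records=32768):
        self.max_queued_records = max_queued_records
        self.queued = 0          # records currently queued in the connector
        self.paused = False      # whether topics are currently paused

    def on_record_queued(self):
        self.queued += 1
        # The queue may exceed the limit before topics are paused.
        if self.queued > self.max_queued_records:
            self.paused = True

    def on_record_processed(self):
        self.queued -= 1
        # All topics resume once the queue drops under half the maximum.
        if self.paused and self.queued < self.max_queued_records // 2:
            self.paused = False
```

For example, with max-queued-records set to 4, queuing a fifth record pauses topics, and they stay paused until fewer than 2 records remain queued.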

Here is an example:

max-queued-records: 10000

aerospike:
  seeds:
    - 192.168.50.1:
        port: 3000
        tls-name: red
    - 192.168.50.2
  cluster-name: east

topics:
  users:
    invalid-record: ignore
    mapping:
      namespace:
        mode: static
        value: users
      set:
        mode: dynamic
        source: value-field
        field-name: city
      key-field:
        source: key
      ttl:
        mode: dynamic
        source: value-field
        field-name: ttl
      bins:
        type: multi-bins
        map:
          name:
            source: value-field
            field-name: firstName
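To see what this mapping does to an incoming record, here is a hedged Python sketch of the transformation the example config describes: the namespace is the static value "users", the set comes from the value's "city" field, the Aerospike key comes from the Kafka record key, the TTL comes from the value's "ttl" field, and a single bin "name" is filled from "firstName". The function and field values below are illustrative, not the connector's actual code:

```python
import json

def map_kafka_record(kafka_key, kafka_value_json):
    """Sketch of the example mapping above applied to one Kafka record."""
    value = json.loads(kafka_value_json)
    return {
        "namespace": "users",                   # namespace: mode static, value users
        "set": value["city"],                   # set: dynamic, from value-field "city"
        "key": kafka_key,                       # key-field: source key
        "ttl": value["ttl"],                    # ttl: dynamic, from value-field "ttl"
        "bins": {"name": value["firstName"]},   # multi-bins: bin "name" from "firstName"
    }
```

For instance, a Kafka record with key "u123" and value {"city": "Austin", "ttl": 3600, "firstName": "Ada"} would be written as a record in namespace "users", set "Austin", with user key "u123", TTL 3600, and a single bin name=Ada.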