Modifying an Existing Configuration of the Kafka Sink Connector

You can modify the configuration of the Kafka inbound (sink) connector even after the connector is deployed. In both standalone and distributed modes, the first step is to edit the configuration file.

Edit the configuration file

Edit the file <connector-directory>/etc/aerospike-kafka-inbound.yml. See configuration for details.
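For example, a cautious way to make the change is to keep a copy of the previous file so it can be diffed or restored; the .bak suffix and the choice of editor are only conventions for this sketch:

    # Back up the current configuration, edit it, then review the change
    cp <connector-directory>/etc/aerospike-kafka-inbound.yml <connector-directory>/etc/aerospike-kafka-inbound.yml.bak
    vi <connector-directory>/etc/aerospike-kafka-inbound.yml
    diff <connector-directory>/etc/aerospike-kafka-inbound.yml.bak <connector-directory>/etc/aerospike-kafka-inbound.yml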

Standalone Mode

Kill the running connector

  1. Use ps aux to list the processes running on the system and to locate the JVM process that is running the connector (see the sketch after this list).
  2. Note the ID of the process.
  3. Send a kill signal to the process by running this command:

     kill -9 <pid>

     <pid>: The ID of the process.
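A minimal shell sketch of these steps, assuming the connector was started with connect-standalone.sh and that only one matching process is running (the process name can differ in your install):

    # Locate the Kafka Connect worker that hosts the connector; the bracketed
    # first letter keeps grep from matching itself
    pid=$(ps aux | grep '[c]onnect-standalone' | awk '{print $2}')

    # Stop that process
    kill -9 "$pid"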

Restart the connector

Run the following command:

<kafka-dir>/bin/connect-standalone.sh <path-to-your-Kafka-Connect-config-file> <path-to-aerospike-sink.properties>
  • <kafka-dir>: The directory where the Kafka package is located.
  • <path-to-your-Kafka-Connect-config-file>: The path to the file (including the filename and extension) that you are using to configure the worker in Kafka Connect.
  • <path-to-aerospike-sink.properties>: The path to the file (including the filename and extension) that you created when you deployed the connector. See "Standalone mode" in step 4 of "Deploying the Kafka Inbound Connector".
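For example, with Kafka installed under /opt/kafka and both files kept in /opt/kafka/config (paths chosen only for illustration), the command might look like this:

    # Restart the standalone worker with the existing worker config and the
    # connector properties file created during deployment
    /opt/kafka/bin/connect-standalone.sh /opt/kafka/config/connect-standalone.properties /opt/kafka/config/aerospike-sink.properties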

Distributed Mode

View existing config

To view the current configuration that is being used by the connector, issue this request to Kafka Connect's REST interface:

GET /connectors/aerospike-sink/tasks HTTP/1.1
Host: <hostname or IP address>

<hostname or IP address>: The hostname or IP address of any of the Kafka Connect nodes. As stated in the Connect REST Interface page of the Kafka Connect documentation, you "can make requests to any cluster member; the REST API automatically forwards requests if required."
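As a sketch, the same request issued with curl, assuming a Connect worker is listening on the default REST port 8083 at connect1.example.com (both are illustrative):

    # List the connector's tasks and their current configuration
    curl -s http://connect1.example.com:8083/connectors/aerospike-sink/tasks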

Copy the edited file

Copy the edited configuration file to each of the other Kafka Connect nodes, replacing the previous version of the file.
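One way to do this, assuming the other Kafka Connect nodes are reachable as connect2 and connect3 and use the same directory layout (both assumptions), is a short scp loop:

    # Push the edited file to each remaining Kafka Connect node
    for host in connect2 connect3; do
      scp <connector-directory>/etc/aerospike-kafka-inbound.yml \
          ${host}:<connector-directory>/etc/aerospike-kafka-inbound.yml
    done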

Update the connector

  1. Set this variable:

     aerosink='{
       "connector.class": "com.aerospike.connect.kafka.inbound.AerospikeSinkConnector",
       "config-file": "/etc/aerospike-kafka-inbound/inbound.yml",
       "tasks.max": "<value>",
       "topics": "<value>"
     }'
  • tasks.max: The maximum number of tasks that can be created for the connector. A task runs as a process in Kafka Connect.
  • topics: A list of comma-separated names of the topics for the connector to subscribe to.
  2. Set this variable:

    kafkaEndpoint="<URI>"

    kafkaEndpoint: This is the REST endpoint for the Kafka Connect service. You can make requests to any cluster member; the REST API automatically forwards requests, if required.

  3. Issue a request to Kafka Connect's REST interface. The request updates all of the connector tasks together.

     curl -X PUT --header "Content-Type: application/json" --data "${aerosink}" ${kafkaEndpoint}/connectors/aerospike-sink/config
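Putting the steps together with illustrative values (two tasks, a topic named users, and a worker at connect1.example.com on the default port 8083 are all assumptions for the example):

    # Connector configuration to apply
    aerosink='{
      "connector.class": "com.aerospike.connect.kafka.inbound.AerospikeSinkConnector",
      "config-file": "/etc/aerospike-kafka-inbound/inbound.yml",
      "tasks.max": "2",
      "topics": "users"
    }'

    # REST endpoint of any Kafka Connect worker in the cluster
    kafkaEndpoint="http://connect1.example.com:8083"

    # Apply the new configuration to the connector and all of its tasks
    curl -X PUT --header "Content-Type: application/json" \
         --data "${aerosink}" ${kafkaEndpoint}/connectors/aerospike-sink/config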

Verify the changes

To verify the changes, issue the same request that you used in "View existing config" and confirm that the tasks show the new settings.
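As before, a curl sketch of that request (the hostname and default port 8083 are illustrative):

    # Confirm that the tasks now run with the updated configuration
    curl -s http://connect1.example.com:8083/connectors/aerospike-sink/tasks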