Add Sink connector format option #12

Status: Open. Wants to merge 13 commits into base: master.
33 changes: 31 additions & 2 deletions README.md
Original file line number Diff line number Diff line change
@@ -27,6 +27,14 @@ Kafka topic to write the messages to.

*Type:* List

##### `rabbitmq.queue.topic.mapping`
*Importance:* High

*Type:* List

A list of mappings between a RabbitMQ queue and a Kafka topic.
This setting is an alternative to the 'rabbitmq.queue' and 'kafka.topic' settings. It allows a single connector instance to define a many-to-many mapping, instead of only mapping many queues to one topic.
When both settings are present, 'rabbitmq.queue' and 'kafka.topic' take precedence. Example of mapping config: 'queue1:topic1,queue2:topic2'
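Parsing the `'queue1:topic1,queue2:topic2'` format above takes only a few lines of Java. This is an illustrative sketch, not the connector's actual implementation; the class and method names here are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

public class QueueTopicMappingExample {

  // Hypothetical helper: parses 'queue1:topic1,queue2:topic2'
  // into a queue -> topic map.
  public static Map<String, String> parseQueueTopicMapping(String mapping) {
    Map<String, String> result = new HashMap<>();
    for (String pair : mapping.split(",")) {
      String[] parts = pair.trim().split(":");
      if (parts.length != 2) {
        throw new IllegalArgumentException("Invalid mapping entry: " + pair);
      }
      result.put(parts[0].trim(), parts[1].trim());
    }
    return result;
  }

  public static void main(String[] args) {
    Map<String, String> mapping =
        parseQueueTopicMapping("queue1:topic1,queue2:topic2");
    System.out.println(mapping); // each queue maps to its own topic
  }
}
```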

##### `rabbitmq.host`
@@ -63,15 +71,22 @@ The username to authenticate to RabbitMQ with. See `ConnectionFactory.setUsernam

*Default Value:* /

Converter to compose the Kafka message.
The virtual host to use when connecting to the broker. See `ConnectionFactory.setVirtualHost(java.lang.String) <https://www.rabbitmq.com/releases/rabbitmq-java-client/current-javadoc/com/rabbitmq/client/ConnectionFactory.html#setVirtualHost-java.lang.String->`_

##### `message.converter`
*Importance:* Medium

*Type:* String

*Default Value:* com.github.themeetgroup.kafka.connect.rabbitmq.source.data.MessageConverter

The virtual host to use when connecting to the broker. See `ConnectionFactory.setVirtualHost(java.lang.String) <https://www.rabbitmq.com/releases/rabbitmq-java-client/current-javadoc/com/rabbitmq/client/ConnectionFactory.html#setVirtualHost-java.lang.String->`_
*Other allowed values*:
- com.github.themeetgroup.kafka.connect.rabbitmq.source.data.BytesSourceMessageConverter
- com.github.themeetgroup.kafka.connect.rabbitmq.source.data.StringSourceMessageConverter

Converter to compose the Kafka message.


##### `rabbitmq.port`
*Importance:* Medium

@@ -256,6 +271,20 @@ exchange to publish the messages on.


routing key used for publishing the messages.

##### `rabbitmq.format`
*Importance:* High

*Type:* String

*Default Value:* bytes

*Other allowed values*:
- json
- avro (non-Confluent Avro)

The format type to use when writing data to RabbitMQ.
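The PR routes each record through a `RecordFormatter` chosen from this setting (see `RecordFormatter.getInstance(config.format)` in the sink task). The dispatch could look roughly like the sketch below; the formatter bodies here are placeholders and assumptions, not the PR's real implementations:

```java
import java.nio.charset.StandardCharsets;

public class FormatDispatchExample {

  public interface Formatter {
    byte[] format(Object value);
  }

  // Hypothetical stand-in for RecordFormatter.getInstance(config.format):
  // picks a serialization strategy from the `rabbitmq.format` value.
  public static Formatter getInstance(String format) {
    switch (format) {
      case "bytes":
        return value -> (byte[]) value; // pass raw bytes through unchanged
      case "json":
        // placeholder: a real implementation would serialize via a JSON library
        return value -> value.toString().getBytes(StandardCharsets.UTF_8);
      default:
        throw new IllegalArgumentException("Unsupported rabbitmq.format: " + format);
    }
  }

  public static void main(String[] args) {
    byte[] raw = getInstance("bytes").format(new byte[] {1, 2, 3});
    System.out.println(raw.length);
  }
}
```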

##### `topics`
*Importance:* High

3 changes: 2 additions & 1 deletion bin/create-topic.sh → bin/create-topics.sh
@@ -15,4 +15,5 @@
# limitations under the License.
#

kafka-topics --create --topic rabbitmq.test --bootstrap-server 127.0.0.1:9092
kafka-topics --create --topic topic1 --bootstrap-server 127.0.0.1:9092
kafka-topics --create --topic topic2 --bootstrap-server 127.0.0.1:9092
2 changes: 1 addition & 1 deletion bin/debug.sh
@@ -21,5 +21,5 @@ export KAFKA_DEBUG='y'

set -e

mvn clean package
mvn clean package -Dcheckstyle.skip
connect-standalone config/connect-avro-docker.properties config/RabbitMQSourceConnector.properties
2 changes: 1 addition & 1 deletion bin/read-topic-with-headers.sh
@@ -15,7 +15,7 @@
# limitations under the License.
#

kafkacat -b localhost:9092 -t rabbitmq.test -C \
kafkacat -b localhost:9092 -t topic1 -C \
-f '\nKey (%K bytes): %k
Value (%S bytes): %s
Timestamp: %T
7 changes: 7 additions & 0 deletions config/RabbitMQSinkConnector.properties
@@ -0,0 +1,7 @@
name=rabbitmq-sink
tasks.max=1
connector.class=com.github.themeetgroup.kafka.connect.rabbitmq.sink.RabbitMQSinkConnector
rabbitmq.exchange=exchange
rabbitmq.routing.key=routingkey
rabbitmq.format=json
topics=rabbitmq-test
7 changes: 7 additions & 0 deletions config/RabbitMQSourceConnector.properties
@@ -0,0 +1,7 @@
name=rabbitmq-source
tasks.max=1
connector.class=com.github.themeetgroup.kafka.connect.rabbitmq.source.RabbitMQSourceConnector
#rabbitmq.queue=test1,test2
#kafka.topic=rabbitmq-test
rabbitmq.queue.topic.mapping=test1:topic1,test2:topic2
message.converter=com.github.themeetgroup.kafka.connect.rabbitmq.source.data.BytesSourceMessageConverter
6 changes: 6 additions & 0 deletions config/connect-avro-docker.properties
@@ -0,0 +1,6 @@
bootstrap.servers=localhost:9092
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
offset.storage.file.filename=/tmp/connect.offsets
plugin.path=target/kafka-connect-target/usr/share/kafka-connect
100 changes: 88 additions & 12 deletions pom.xml
@@ -22,6 +22,13 @@
</license>
</licenses>

<repositories>
<repository>
<id>Confluent</id>
<url>https://packages.confluent.io/maven/</url>
</repository>
</repositories>

<developers>
<developer>
<id>jcustenborder</id>
@@ -32,13 +39,21 @@
</roles>
</developer>
<developer>
<id>insidn</id>
<id>insidin</id>
<name>Jan Uyttenhove</name>
<url>https://github.com/insidin</url>
<roles>
<role>Committer</role>
</roles>
</developer>
<developer>
<id>jelledv</id>
<name>Jelle De Vleminck</name>
<url>https://github.com/jelledv</url>
<roles>
<role>Committer</role>
</roles>
</developer>
</developers>

<scm>
@@ -55,32 +70,64 @@
<properties>
<rabbitmq.version>5.10.0</rabbitmq.version>
<mockito.version>3.3.0</mockito.version>
<kafka.version>2.6.0</kafka.version>
<confluent.version>6.0.0</confluent.version>
<!-- Keep those versions in sync with version used in Kafka releases -->
<jackson.version>2.8.5</jackson.version>
<avro.version>1.9.2</avro.version>
</properties>

<dependencies>
<dependency>
<groupId>io.confluent</groupId>
<artifactId>kafka-connect-avro-converter</artifactId>
<version>${confluent.version}</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.datatype</groupId>
<artifactId>jackson-datatype-jsr310</artifactId>
<version>${jackson.version}</version>
</dependency>

<dependency>
<groupId>com.rabbitmq</groupId>
<artifactId>amqp-client</artifactId>
<version>${rabbitmq.version}</version>
</dependency>
<dependency>
<groupId>com.github.jcustenborder.kafka.connect</groupId>
<artifactId>connect-utils-testing-data</artifactId>
<version>${connect-utils.version}</version>
</dependency>
<dependency>
<groupId>org.apache.avro</groupId>
<artifactId>avro</artifactId>
<version>1.8.2</version>
<groupId>org.apache.kafka</groupId>
<artifactId>connect-json</artifactId>
<version>${kafka.version}</version>
</dependency>
<dependency>
<groupId>io.confluent</groupId>
<artifactId>kafka-avro-serializer</artifactId>
<version>5.1.0</version>
<groupId>org.apache.kafka</groupId>
<artifactId>connect-runtime</artifactId>
<version>${kafka.version}</version>
<scope>provided</scope>
</dependency>

</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-assembly-plugin</artifactId>
<version>3.1.1</version>
<configuration>
<descriptorRefs>
<descriptorRef>jar-with-dependencies</descriptorRef>
</descriptorRefs>
</configuration>
<executions>
<execution>
<id>fat-jar</id>
<phase>package</phase>
<goals>
<goal>single</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-javadoc-plugin</artifactId>
@@ -118,6 +165,35 @@
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.avro</groupId>
<artifactId>avro-maven-plugin</artifactId>
<version>${avro.version}</version>
<configuration>
<sourceDirectory>${project.basedir}/src/test/resources</sourceDirectory>
<imports>
<import>${project.basedir}/src/test/resources/payment.avsc</import>
</imports>
<enableDecimalLogicalType>true</enableDecimalLogicalType>
<fieldVisibility>private</fieldVisibility>
</configuration>
<executions>
<execution>
<id>second</id>
<phase>generate-sources</phase>
<goals>
<goal>schema</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-checkstyle-plugin</artifactId>
<configuration>
<skip>true</skip>
</configuration>
</plugin>
</plugins>
</build>

@@ -24,6 +24,7 @@
import com.github.jcustenborder.kafka.connect.utils.template.StructTemplate;

public class RabbitMQSinkConnectorConfig extends CommonRabbitMQConnectorConfig {

static final String KAFKA_TOPIC_TEMPLATE = "kafkaTopicTemplate";
public static final String TOPIC_CONF = "topics";
static final String TOPIC_DOC = "Kafka topic to read the messages from.";
@@ -36,14 +37,17 @@ public class RabbitMQSinkConnectorConfig {
public static final String ROUTING_KEY_CONF = "rabbitmq.routing.key";
static final String ROUTING_KEY_DOC = "routing key used for publishing the messages.";

public static final String FORMAT_CONF = "rabbitmq.format";
public static final String FORMAT_CONF_DOC = "The format type to use when writing data to RabbitMQ";
public static final String FORMAT_CONF_DEFAULT = "bytes";

public static final String HEADER_CONF = "rabbitmq.headers";
public static final String HEADER_CONF_DOC = "Headers to set for outbound messages. Set with `headername1`:`headervalue1`,`headername2`:`headervalue2`";
//TODO: include other config variables here

public final StructTemplate kafkaTopic;
public final String exchange;
public final String routingKey;
public final String format;

public RabbitMQSinkConnectorConfig(Map<String, String> settings) {
super(config(), settings);
@@ -52,16 +56,15 @@ public RabbitMQSinkConnectorConfig(Map<String, String> settings) {
this.kafkaTopic.addTemplate(KAFKA_TOPIC_TEMPLATE, kafkaTopicFormat);
this.exchange = this.getString(EXCHANGE_CONF);
this.routingKey = this.getString(ROUTING_KEY_CONF);
this.format = this.getString(FORMAT_CONF);
}

public static ConfigDef config() {
return CommonRabbitMQConnectorConfig.config()
.define(TOPIC_CONF, ConfigDef.Type.STRING, ConfigDef.Importance.HIGH, TOPIC_DOC)
.define(EXCHANGE_CONF, ConfigDef.Type.STRING, "", ConfigDef.Importance.MEDIUM, EXCHANGE_DOC)
.define(ROUTING_KEY_CONF, ConfigDef.Type.STRING, ConfigDef.Importance.HIGH, ROUTING_KEY_DOC)
.define(HEADER_CONF, ConfigDef.Type.STRING, null, null, ConfigDef.Importance.LOW, HEADER_CONF_DOC);


.define(TOPIC_CONF, ConfigDef.Type.STRING, ConfigDef.Importance.HIGH, TOPIC_DOC)
.define(EXCHANGE_CONF, ConfigDef.Type.STRING, "", ConfigDef.Importance.MEDIUM, EXCHANGE_DOC)
.define(ROUTING_KEY_CONF, ConfigDef.Type.STRING, ConfigDef.Importance.HIGH, ROUTING_KEY_DOC)
.define(FORMAT_CONF, ConfigDef.Type.STRING, FORMAT_CONF_DEFAULT, ConfigDef.Importance.HIGH, FORMAT_CONF_DOC)
.define(HEADER_CONF, ConfigDef.Type.STRING, null, null, ConfigDef.Importance.LOW, HEADER_CONF_DOC);
}

}
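The `define(FORMAT_CONF, ...)` above accepts any string at configuration time; unsupported values only fail later. One option (a suggestion, not part of this PR) is Kafka's `ConfigDef.ValidString.in("bytes", "json", "avro")` validator. A standalone sketch of the same fail-fast idea, without the Kafka dependency:

```java
import java.util.Arrays;
import java.util.List;

public class FormatValidationExample {

  private static final List<String> ALLOWED_FORMATS =
      Arrays.asList("bytes", "json", "avro");

  // Rejects unsupported rabbitmq.format values at connector start-up
  // instead of failing on the first record.
  public static String validateFormat(String format) {
    if (!ALLOWED_FORMATS.contains(format)) {
      throw new IllegalArgumentException(
          "Invalid rabbitmq.format '" + format + "', allowed values: " + ALLOWED_FORMATS);
    }
    return format;
  }

  public static void main(String[] args) {
    System.out.println(validateFormat("json"));
  }
}
```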
@@ -17,6 +17,7 @@
package com.github.themeetgroup.kafka.connect.rabbitmq.sink;

import com.github.jcustenborder.kafka.connect.utils.VersionUtil;
import com.github.themeetgroup.kafka.connect.rabbitmq.sink.format.RecordFormatter;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
@@ -35,12 +36,12 @@
import static com.github.themeetgroup.kafka.connect.rabbitmq.sink.RabbitMQSinkConnectorConfig.HEADER_CONF;

public class RabbitMQSinkTask extends SinkTask {
private static final Logger log = LoggerFactory.getLogger(RabbitMQSinkTask.class);
RabbitMQSinkConnectorConfig config;

Channel channel;
Connection connection;

private static final Logger log = LoggerFactory.getLogger(RabbitMQSinkTask.class);
private RabbitMQSinkConnectorConfig config;
private RecordFormatter recordFormatter;
private Channel channel;
private Connection connection;

@Override
public String version() {
@@ -51,12 +52,9 @@ public String version() {
public void put(Collection<SinkRecord> sinkRecords) {
for (SinkRecord record : sinkRecords) {
log.trace("current sinkRecord value: " + record.value());
if (!(record.value() instanceof byte[])) {
throw new ConnectException("the value of the record has an invalid type (must be of type byte[])");
}
try {
channel.basicPublish(this.config.exchange, this.config.routingKey,
RabbitMQSinkHeaderParser.parse(config.getString(HEADER_CONF)), (byte[]) record.value());
RabbitMQSinkHeaderParser.parse(config.getString(HEADER_CONF)), recordFormatter.format(record));
} catch (IOException e) {
log.error("There was an error while publishing the outgoing message to RabbitMQ");
throw new RetriableException(e);
@@ -67,6 +65,7 @@
@Override
public void start(Map<String, String> settings) {
this.config = new RabbitMQSinkConnectorConfig(settings);
this.recordFormatter = RecordFormatter.getInstance(config.format);
ConnectionFactory connectionFactory = this.config.connectionFactory();
try {
log.info("Opening connection to {}:{}/{} (SSL: {})", this.config.host, this.config.port, this.config.virtualHost, this.config.useSsl);