diff --git a/connectors/canal-source/v3.1.1.1/canal-source.md b/connectors/canal-source/v3.1.1.1/canal-source.md
index ca9e050af..e8c31da20 100644
--- a/connectors/canal-source/v3.1.1.1/canal-source.md
+++ b/connectors/canal-source/v3.1.1.1/canal-source.md
@@ -29,16 +29,16 @@ The configuration of Canal source connector has the following properties.
## Property
-| Name | Required | Default | Description |
-|------|----------|---------|-------------|
-| `username` | true | None | Canal server account (not MySQL).|
-| `password` | true | None | Canal server password (not MySQL). |
-|`destination`|true|None|Source destination that Canal source connector connects to.
-| `singleHostname` | false | None | Canal server address.|
-| `singlePort` | false | None | Canal server port.|
-| `cluster` | true | false | Whether to enable cluster mode based on Canal server configuration or not.
true: **cluster** mode.
If set to true, it talks to `zkServers` to figure out the actual database host.
false: **standalone** mode.
If set to false, it connects to the database specified by `singleHostname` and `singlePort`. |
-| `zkServers` | true | None | Address and port of the Zookeeper that Canal source connector talks to figure out the actual database host.|
-| `batchSize` | false | 1000 | Batch size to fetch from Canal. |
+| Name | Required | Sensitive | Default | Description |
+|------------------|----------|-----------|---------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `username` | true | true | None | Canal server account (not MySQL). |
+| `password` | true | true | None | Canal server password (not MySQL). |
+| `destination`    | true     | false     | None    | Source destination that the Canal source connector connects to. |
+| `singleHostname` | false | false | None | Canal server address. |
+| `singlePort` | false | false | None | Canal server port. |
+| `cluster`        | true     | false     | false   | Whether to enable cluster mode based on Canal server configuration or not.<br>true: **cluster** mode. If set to true, it talks to `zkServers` to figure out the actual database host.<br>false: **standalone** mode. If set to false, it connects to the database specified by `singleHostname` and `singlePort`. |
+| `zkServers`      | true     | false     | None    | Address and port of the ZooKeeper that the Canal source connector talks to in order to figure out the actual database host. |
+| `batchSize` | false | false | 1000 | Batch size to fetch from Canal. |
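The table above can be sanity-checked in client code before submitting the connector. A minimal Python sketch, assuming only the property names and the Required column from the table; the `missing_required` helper and the sample values are ours, not part of the connector:

```python
# Validate a Canal source config map against the Required column above.
# Key names come from the table; the helper itself is hypothetical.

REQUIRED = {"username", "password", "destination", "cluster", "zkServers"}

def missing_required(config: dict) -> set:
    """Return the required Canal source properties absent from config."""
    return REQUIRED - config.keys()

canal_config = {
    "username": "canal",           # Canal server account (not MySQL)
    "password": "canal",           # Canal server password (not MySQL)
    "destination": "example",      # source destination to connect to
    "cluster": False,              # standalone mode: use singleHostname/singlePort
    "singleHostname": "127.0.0.1", # sample value for illustration only
    "singlePort": 11111,
    "zkServers": "",               # unused in standalone mode
    "batchSize": 1000,
}

print(sorted(missing_required(canal_config)))  # [] when all required keys are set
```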
## Example
diff --git a/connectors/debezium-mongodb-source/v3.1.1.1/debezium-mongodb-source.md b/connectors/debezium-mongodb-source/v3.1.1.1/debezium-mongodb-source.md
index 004b77a37..d71c7f837 100644
--- a/connectors/debezium-mongodb-source/v3.1.1.1/debezium-mongodb-source.md
+++ b/connectors/debezium-mongodb-source/v3.1.1.1/debezium-mongodb-source.md
@@ -120,20 +120,20 @@ key:[eyJpZCI6IjQifQ==], properties:[], content:{"after":"{\"_id\": {\"$numberLon
## Configuration Properties
The configuration of Debezium Mongodb source connector has the following properties.
-| Name | Required | Default | Description |
-|------|----------|---------|-------------|
-| `mongodb.hosts` | true | null | The comma-separated list of hostname and port pairs (in the form 'host' or 'host:port') of the MongoDB servers in the replica set. The list contains a single hostname and a port pair. If mongodb.members.auto.discover is set to false, the host and port pair are prefixed with the replica set name (e.g., rs0/localhost:27017). |
-| `mongodb.name` | true | null | A unique name that identifies the connector and/or MongoDB replica set or shared cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. |
-| `mongodb.user` | false | null | Name of the database user to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
-| `mongodb.password` | false | null | Password to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
-| `mongodb.task.id` | true | null | The taskId of the MongoDB connector that attempts to use a separate task for each replica set. |
-| `database.whitelist` | false | null | A list of all databases hosted by this server which is monitored by the connector.
By default, all databases are monitored. |
-| `key.converter` | false | null | The converter provided by Kafka Connect to convert record key. |
-| `value.converter` | false | null | The converter provided by Kafka Connect to convert record value. |
-| `database.history.pulsar.topic` | false | null | The name of the database history topic where the connector writes and recovers DDL statements.
**Note: this topic is for internal use only and should not be used by consumers.** |
-| `database.history.pulsar.service.url` | false | null | Pulsar cluster service URL for history topic. |
-| `offset.storage.topic` | false | null | Record the last committed offsets that the connector successfully completes. By default, it's `topicNamespace + "/" + sourceName + "-debezium-offset-topic"`. eg. `persistent://public/default/debezium-mongodb-source-debezium-offset-topic`|
-| `json-with-envelope`| false | false | The`json-with-envelope` config is valid only for the JsonConverter. By default, the value is set to false. When the `json-with-envelope` value is set to false, the consumer uses the schema `Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)`, and the message only consists of the payload. When the `json-with-envelope` value is set to true, the consumer uses the schema `Schema.KeyValue(Schema.BYTES, Schema.BYTES)`, and the message consists of the schema and the payload. |
+| Name | Required | Sensitive | Default | Description |
+|---------------------------------------|----------|-----------|---------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `mongodb.hosts`                       | true     | false     | null    | The comma-separated list of hostname and port pairs (in the form `host` or `host:port`) of the MongoDB servers in the replica set. The list contains a single hostname and port pair. If `mongodb.members.auto.discover` is set to false, the host and port pair are prefixed with the replica set name (e.g., `rs0/localhost:27017`). |
+| `mongodb.name`                        | true     | false     | null    | A unique name that identifies the connector and/or MongoDB replica set or sharded cluster that this connector monitors. Each server should be monitored by at most one Debezium connector, since this server name prefixes all persisted Kafka topics emanating from the MongoDB replica set or cluster. |
+| `mongodb.user` | false | true | null | Name of the database user to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
+| `mongodb.password` | false | true | null | Password to be used when connecting to MongoDB. This is required only when MongoDB is configured to use authentication. |
+| `mongodb.task.id` | true | false | null | The taskId of the MongoDB connector that attempts to use a separate task for each replica set. |
+| `database.whitelist`                  | false    | false     | null    | A list of all databases hosted by this server that are monitored by the connector.<br>By default, all databases are monitored. |
+| `key.converter` | false | false | null | The converter provided by Kafka Connect to convert record key. |
+| `value.converter` | false | false | null | The converter provided by Kafka Connect to convert record value. |
+| `database.history.pulsar.topic`       | false    | false     | null    | The name of the database history topic where the connector writes and recovers DDL statements.<br>**Note: this topic is for internal use only and should not be used by consumers.** |
+| `database.history.pulsar.service.url` | false | false | null | Pulsar cluster service URL for history topic. |
+| `offset.storage.topic`                | false    | false     | null    | Records the last committed offsets that the connector successfully completes. By default, it's `topicNamespace + "/" + sourceName + "-debezium-offset-topic"`, e.g. `persistent://public/default/debezium-mongodb-source-debezium-offset-topic`. |
+| `json-with-envelope`                  | false    | false     | false   | The `json-with-envelope` config is valid only for the JsonConverter. By default, the value is set to false. When the `json-with-envelope` value is set to false, the consumer uses the schema `Schema.KeyValue(Schema.AUTO_CONSUME(), Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)`, and the message only consists of the payload. When the `json-with-envelope` value is set to true, the consumer uses the schema `Schema.KeyValue(Schema.BYTES, Schema.BYTES)`, and the message consists of the schema and the payload. |
For more configuration properties, please see [Debezium MongoDB connector configuration properties](https://debezium.io/documentation/reference/1.9/connectors/mongodb.html#mongodb-connector-properties).
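The default `offset.storage.topic` naming rule quoted in the table can be sketched directly. Only the formula comes from the docs above; the helper name is hypothetical:

```python
# Sketch of the default offset.storage.topic naming rule:
# topicNamespace + "/" + sourceName + "-debezium-offset-topic".

def default_offset_topic(topic_namespace: str, source_name: str) -> str:
    """Build the default offset topic name for a Debezium source."""
    return topic_namespace + "/" + source_name + "-debezium-offset-topic"

print(default_offset_topic("persistent://public/default", "debezium-mongodb-source"))
# persistent://public/default/debezium-mongodb-source-debezium-offset-topic
```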
diff --git a/connectors/debezium-mssql-source/v3.1.1.1/debezium-mssql-source.md b/connectors/debezium-mssql-source/v3.1.1.1/debezium-mssql-source.md
index 3e521fb72..3f43a0a99 100644
--- a/connectors/debezium-mssql-source/v3.1.1.1/debezium-mssql-source.md
+++ b/connectors/debezium-mssql-source/v3.1.1.1/debezium-mssql-source.md
@@ -122,23 +122,23 @@ key:[eyJpZCI6MTB9], properties:[], content:{"before":null,"after":{"id":1,"name"
## Configuration Properties
The configuration of Debezium source connector has the following properties.
-| Name | Required | Default | Description |
-|---------------------------------------|----------|---------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `database.hostname` | true | null | The address of a database server. |
-| `database.port` | true | null | The port number of a database server. |
-| `database.user` | true | null | The name of a database user that has the required privileges. |
-| `database.password` | true | null | The password for a database user that has the required privileges. |
-| `database.dbname` | true | null | The database.dbname parameter in Debezium configuration is used to specify the name of the specific database that the connector should connect to. |
-| `database.server.name` | true | null | The logical name of a database server/cluster, which forms a namespace and it is used in all the names of Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro Connector is used. |
-| `database.server.id` | false | null | The connector’s identifier that must be unique within a database cluster and similar to the database’s server-id configuration property. |
-| `database.whitelist` | false | null | A list of all databases hosted by this server which is monitored by the connector.
This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. |
-| `key.converter` | false | null | The converter provided by Kafka Connect to convert record key. |
-| `value.converter` | false | null | The converter provided by Kafka Connect to convert record value. |
-| `database.history` | false | null | The name of the database history class. |
-| `database.history.pulsar.topic` | false | null | The name of the database history topic where the connector writes and recovers DDL statements.
**Note: this topic is for internal use only and should not be used by consumers.** |
-| `database.history.pulsar.service.url` | false | null | Pulsar cluster service URL for history topic. |
-| `pulsar.service.url` | false | null | Pulsar cluster service URL. |
-| `offset.storage.topic` | false | null | Record the last committed offsets that the connector successfully completes. |
+| Name | Required | Sensitive | Default | Description |
+|---------------------------------------|----------|-----------|---------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `database.hostname` | true | false | null | The address of a database server. |
+| `database.port` | true | false | null | The port number of a database server. |
+| `database.user` | true | true | null | The name of a database user that has the required privileges. |
+| `database.password` | true | true | null | The password for a database user that has the required privileges. |
+| `database.dbname`                     | true     | false     | null    | The `database.dbname` parameter in the Debezium configuration specifies the name of the database that the connector connects to. |
+| `database.server.name`                | true     | false     | null    | The logical name of a database server/cluster, which forms a namespace and is used in all the names of Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro Connector is used. |
+| `database.server.id` | false | false | null | The connector’s identifier that must be unique within a database cluster and similar to the database’s server-id configuration property. |
+| `database.whitelist`                  | false    | false     | null    | A list of all databases hosted by this server that are monitored by the connector.<br>This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. |
+| `key.converter` | false | false | null | The converter provided by Kafka Connect to convert record key. |
+| `value.converter` | false | false | null | The converter provided by Kafka Connect to convert record value. |
+| `database.history` | false | false | null | The name of the database history class. |
+| `database.history.pulsar.topic`       | false    | false     | null    | The name of the database history topic where the connector writes and recovers DDL statements.<br>**Note: this topic is for internal use only and should not be used by consumers.** |
+| `database.history.pulsar.service.url` | false | false | null | Pulsar cluster service URL for history topic. |
+| `pulsar.service.url` | false | false | null | Pulsar cluster service URL. |
+| `offset.storage.topic`                | false    | false     | null    | Records the last committed offsets that the connector successfully completes. |
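The new Sensitive column above marks `database.user` and `database.password` as sensitive. A minimal sketch of how a caller might use that flag to redact values before logging a config; the `redact` helper and sample values are ours, not part of the connector:

```python
# Redact the properties marked Sensitive=true in the table above
# before printing or logging a connector config. Hypothetical helper.

SENSITIVE = {"database.user", "database.password"}

def redact(config: dict) -> dict:
    """Return a copy of config with sensitive values masked."""
    return {k: ("*****" if k in SENSITIVE else v) for k, v in config.items()}

cfg = {
    "database.hostname": "localhost",  # sample values for illustration
    "database.port": "1433",
    "database.user": "sa",
    "database.password": "secret",
}
print(redact(cfg)["database.password"])  # *****
```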
## Advanced features
diff --git a/connectors/debezium-mysql-source/v3.1.1.1/debezium-mysql-source.md b/connectors/debezium-mysql-source/v3.1.1.1/debezium-mysql-source.md
index 653d9ec4c..f379f9099 100644
--- a/connectors/debezium-mysql-source/v3.1.1.1/debezium-mysql-source.md
+++ b/connectors/debezium-mysql-source/v3.1.1.1/debezium-mysql-source.md
@@ -127,23 +127,23 @@ key:[eyJpZCI6MX0=], properties:[], content:{"before":{"id":1,"first_name":"mysql
## Configuration Properties
The configuration of Debezium source connector has the following properties.
-| Name | Required | Default | Description |
-|---------------------------------------|----------|---------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `database.hostname` | true | null | The address of a database server. |
-| `database.port` | true | null | The port number of a database server. |
-| `database.user` | true | null | The name of a database user that has the required privileges. |
-| `database.password` | true | null | The password for a database user that has the required privileges. |
-| `database.dbname` | true | null | The database.dbname parameter in Debezium configuration is used to specify the name of the specific database that the connector should connect to. |
-| `database.server.name` | true | null | The logical name of a database server/cluster, which forms a namespace and it is used in all the names of Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro Connector is used. |
-| `database.server.id` | false | null | The connector’s identifier that must be unique within a database cluster and similar to the database’s server-id configuration property. |
-| `database.whitelist` | false | null | A list of all databases hosted by this server which is monitored by the connector.
This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. |
-| `key.converter` | false | null | The converter provided by Kafka Connect to convert record key. |
-| `value.converter` | false | null | The converter provided by Kafka Connect to convert record value. |
-| `database.history` | false | null | The name of the database history class. |
-| `database.history.pulsar.topic` | false | null | The name of the database history topic where the connector writes and recovers DDL statements.
**Note: this topic is for internal use only and should not be used by consumers.** |
-| `database.history.pulsar.service.url` | false | null | Pulsar cluster service URL for history topic. |
-| `pulsar.service.url` | false | null | Pulsar cluster service URL. |
-| `offset.storage.topic` | false | null | Record the last committed offsets that the connector successfully completes. |
+| Name | Required | Sensitive | Default | Description |
+|---------------------------------------|----------|-----------|---------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `database.hostname` | true | false | null | The address of a database server. |
+| `database.port` | true | false | null | The port number of a database server. |
+| `database.user` | true | true | null | The name of a database user that has the required privileges. |
+| `database.password` | true | true | null | The password for a database user that has the required privileges. |
+| `database.dbname`                     | true     | false     | null    | The `database.dbname` parameter in the Debezium configuration specifies the name of the database that the connector connects to. |
+| `database.server.name`                | true     | false     | null    | The logical name of a database server/cluster, which forms a namespace and is used in all the names of Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro Connector is used. |
+| `database.server.id` | false | false | null | The connector’s identifier that must be unique within a database cluster and similar to the database’s server-id configuration property. |
+| `database.whitelist`                  | false    | false     | null    | A list of all databases hosted by this server that are monitored by the connector.<br>This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. |
+| `key.converter` | false | false | null | The converter provided by Kafka Connect to convert record key. |
+| `value.converter` | false | false | null | The converter provided by Kafka Connect to convert record value. |
+| `database.history` | false | false | null | The name of the database history class. |
+| `database.history.pulsar.topic`       | false    | false     | null    | The name of the database history topic where the connector writes and recovers DDL statements.<br>**Note: this topic is for internal use only and should not be used by consumers.** |
+| `database.history.pulsar.service.url` | false | false | null | Pulsar cluster service URL for history topic. |
+| `pulsar.service.url` | false | false | null | Pulsar cluster service URL. |
+| `offset.storage.topic`                | false    | false     | null    | Records the last committed offsets that the connector successfully completes. |
## Advanced features
diff --git a/connectors/debezium-postgres-source/v3.1.1.1/debezium-postgres-source.md b/connectors/debezium-postgres-source/v3.1.1.1/debezium-postgres-source.md
index a40e333eb..b35656005 100644
--- a/connectors/debezium-postgres-source/v3.1.1.1/debezium-postgres-source.md
+++ b/connectors/debezium-postgres-source/v3.1.1.1/debezium-postgres-source.md
@@ -127,24 +127,24 @@ key:[eyJpZCI6M30=], properties:[], content:{"before":{"id":1,"first_name":"pg-io
## Configuration Properties
The configuration of Debezium source connector has the following properties.
-| Name | Required | Default | Description |
-|---------------------------------------|----------|---------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `database.hostname` | true | null | The address of a database server. |
-| `database.port` | true | null | The port number of a database server. |
-| `database.user` | true | null | The name of a database user that has the required privileges. |
-| `database.password` | true | null | The password for a database user that has the required privileges. |
-| `database.dbname` | true | null | The database.dbname parameter in Debezium configuration is used to specify the name of the specific database that the connector should connect to. |
-| `plugin.name` | true | null | The plugin.name parameter in Debezium configuration is used to specify the logical decoding output plugin installed on the PostgreSQL server that the connector should use: `decoderbufs`, `wal2json`, `pgoutput` |
-| `database.server.name` | true | null | The logical name of a database server/cluster, which forms a namespace and it is used in all the names of Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro Connector is used. |
-| `database.server.id` | false | null | The connector’s identifier that must be unique within a database cluster and similar to the database’s server-id configuration property. |
-| `database.whitelist` | false | null | A list of all databases hosted by this server which is monitored by the connector.
This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. |
-| `key.converter` | false | null | The converter provided by Kafka Connect to convert record key. |
-| `value.converter` | false | null | The converter provided by Kafka Connect to convert record value. |
-| `database.history` | false | null | The name of the database history class. |
-| `database.history.pulsar.topic` | false | null | The name of the database history topic where the connector writes and recovers DDL statements.
**Note: this topic is for internal use only and should not be used by consumers.** |
-| `database.history.pulsar.service.url` | false | null | Pulsar cluster service URL for history topic. |
-| `pulsar.service.url` | false | null | Pulsar cluster service URL. |
-| `offset.storage.topic` | false | null | Record the last committed offsets that the connector successfully completes. |
+| Name | Required | Sensitive | Default | Description |
+|---------------------------------------|----------|-----------|---------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `database.hostname` | true | false | null | The address of a database server. |
+| `database.port` | true | false | null | The port number of a database server. |
+| `database.user` | true | true | null | The name of a database user that has the required privileges. |
+| `database.password` | true | true | null | The password for a database user that has the required privileges. |
+| `database.dbname`                     | true     | false     | null    | The `database.dbname` parameter in the Debezium configuration specifies the name of the database that the connector connects to. |
+| `plugin.name`                         | true     | false     | null    | The `plugin.name` parameter in the Debezium configuration specifies the logical decoding output plugin installed on the PostgreSQL server that the connector should use: `decoderbufs`, `wal2json`, or `pgoutput`. |
+| `database.server.name`                | true     | false     | null    | The logical name of a database server/cluster, which forms a namespace and is used in all the names of Kafka topics to which the connector writes, the Kafka Connect schema names, and the namespaces of the corresponding Avro schema when the Avro Connector is used. |
+| `database.server.id` | false | false | null | The connector’s identifier that must be unique within a database cluster and similar to the database’s server-id configuration property. |
+| `database.whitelist`                  | false    | false     | null    | A list of all databases hosted by this server that are monitored by the connector.<br>This is optional, and there are other properties for listing databases and tables to include or exclude from monitoring. |
+| `key.converter` | false | false | null | The converter provided by Kafka Connect to convert record key. |
+| `value.converter` | false | false | null | The converter provided by Kafka Connect to convert record value. |
+| `database.history` | false | false | null | The name of the database history class. |
+| `database.history.pulsar.topic`       | false    | false     | null    | The name of the database history topic where the connector writes and recovers DDL statements.<br>**Note: this topic is for internal use only and should not be used by consumers.** |
+| `database.history.pulsar.service.url` | false | false | null | Pulsar cluster service URL for history topic. |
+| `pulsar.service.url` | false | false | null | Pulsar cluster service URL. |
+| `offset.storage.topic`                | false    | false     | null    | Records the last committed offsets that the connector successfully completes. |
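The `plugin.name` row above enumerates the only accepted logical decoding plugins. A minimal validation sketch, assuming just that list from the table; the checker itself is ours, not part of the connector:

```python
# Check plugin.name against the logical decoding output plugins listed
# in the table above. Hypothetical helper for pre-submit validation.

VALID_PLUGINS = {"decoderbufs", "wal2json", "pgoutput"}

def is_valid_plugin(name: str) -> bool:
    """Return True if name is one of the plugins the table lists."""
    return name in VALID_PLUGINS

print(is_valid_plugin("pgoutput"), is_valid_plugin("test_decoding"))  # True False
```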
## Advanced features
diff --git a/connectors/elasticsearch-sink/v3.1.1.1/elasticsearch-sink.md b/connectors/elasticsearch-sink/v3.1.1.1/elasticsearch-sink.md
index 02656ecfe..d52e5b306 100644
--- a/connectors/elasticsearch-sink/v3.1.1.1/elasticsearch-sink.md
+++ b/connectors/elasticsearch-sink/v3.1.1.1/elasticsearch-sink.md
@@ -125,41 +125,41 @@ curl -s http://localhost:9200/my_index/_search
This table outlines the properties of an Elasticsearch sink connector.
-| Name | Type | Required | Default | Description |
-|--------------------------------|------------------------------------------------------|----------|--------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `elasticSearchUrl` | String | true | " " (empty string) | The URL of elastic search cluster to which the connector connects. |
-| `indexName` | String | false | " " (empty string) | The index name to which the connector writes messages. The default value is the topic name. It accepts date formats in the name to support event time based index with the pattern `%{+}`. For example, suppose the event time of the record is 1645182000000L, the indexName is `logs-%{+yyyy-MM-dd}`, then the formatted index name would be `logs-2022-02-18`. |
-| `schemaEnable` | Boolean | false | false | Turn on the Schema Aware mode. |
-| `createIndexIfNeeded` | Boolean | false | false | Manage index if missing. |
-| `maxRetries` | Integer | false | 1 | The maximum number of retries for elasticsearch requests. Use -1 to disable it. |
-| `retryBackoffInMs` | Integer | false | 100 | The base time to wait when retrying an Elasticsearch request (in milliseconds). |
-| `maxRetryTimeInSec` | Integer | false | 86400 | The maximum retry time interval in seconds for retrying an elasticsearch request. |
-| `bulkEnabled` | Boolean | false | false | Enable the elasticsearch bulk processor to flush write requests based on the number or size of requests, or after a given period. |
-| `bulkActions` | Integer | false | 1000 | The maximum number of actions per elasticsearch bulk request. Use -1 to disable it. |
-| `bulkSizeInMb` | Integer | false | 5 | The maximum size in megabytes of elasticsearch bulk requests. Use -1 to disable it. |
-| `bulkConcurrentRequests` | Integer | false | 0 | The maximum number of in flight elasticsearch bulk requests. The default 0 allows the execution of a single request. A value of 1 means 1 concurrent request is allowed to be executed while accumulating new bulk requests. |
-| `bulkFlushIntervalInMs` | Long | false | 1000 | The maximum period of time to wait for flushing pending writes when bulk writes are enabled. -1 or zero means the scheduled flushing is disabled. |
-| `compressionEnabled` | Boolean | false | false | Enable elasticsearch request compression. |
-| `connectTimeoutInMs` | Integer | false | 5000 | The elasticsearch client connection timeout in milliseconds. |
-| `connectionRequestTimeoutInMs` | Integer | false | 1000 | The time in milliseconds for getting a connection from the elasticsearch connection pool. |
-| `connectionIdleTimeoutInMs` | Integer | false | 5 | Idle connection timeout to prevent a read timeout. |
-| `keyIgnore` | Boolean | false | true | Whether to ignore the record key to build the Elasticsearch document `_id`. If primaryFields is defined, the connector extract the primary fields from the payload to build the document `_id` If no primaryFields are provided, elasticsearch auto generates a random document `_id`. |
-| `primaryFields` | String | false | "id" | The comma separated ordered list of field names used to build the Elasticsearch document `_id` from the record value. If this list is a singleton, the field is converted as a string. If this list has 2 or more fields, the generated `_id` is a string representation of a JSON array of the field values. |
-| `nullValueAction` | enum (IGNORE,DELETE,FAIL) | false | IGNORE | How to handle records with null values, possible options are IGNORE, DELETE or FAIL. Default is IGNORE the message. |
-| `malformedDocAction` | enum (IGNORE,WARN,FAIL) | false | FAIL | How to handle elasticsearch rejected documents due to some malformation. Possible options are IGNORE, DELETE or FAIL. Default is FAIL the Elasticsearch document. |
-| `stripNulls` | Boolean | false | true | If stripNulls is false, elasticsearch _source includes 'null' for empty fields (for example {"foo": null}), otherwise null fields are stripped. |
-| `socketTimeoutInMs` | Integer | false | 60000 | The socket timeout in milliseconds waiting to read the elasticsearch response. |
-| `typeName` | String | false | "_doc" | The type name to which the connector writes messages to.
The value should be set explicitly to a valid type name other than "_doc" for Elasticsearch version before 6.2, and left to default otherwise. |
-| `indexNumberOfShards` | int | false | 1 | The number of shards of the index. |
-| `indexNumberOfReplicas` | int | false | 1 | The number of replicas of the index. |
-| `username` | String | false | " " (empty string) | The username used by the connector to connect to the elastic search cluster.
If `username` is set, then `password` should also be provided. |
-| `password` | String | false | " " (empty string) | The password used by the connector to connect to the elastic search cluster.
If `username` is set, then `password` should also be provided. |
-| `ssl` | ElasticSearchSslConfig | false | | Configuration for TLS encrypted communication |
-| `compatibilityMode` | enum (AUTO,ELASTICSEARCH,ELASTICSEARCH_7,OPENSEARCH) | false | AUTO | Specify compatibility mode with the ElasticSearch cluster. `AUTO` value will try to auto detect the correct compatibility mode to use. Use `ELASTICSEARCH_7` if the target cluster is running ElasticSearch 7 or prior. Use `ELASTICSEARCH` if the target cluster is running ElasticSearch 8 or higher. Use `OPENSEARCH` if the target cluster is running OpenSearch. |
-| `token` | String | false | " " (empty string) | The token used by the connector to connect to the ElasticSearch cluster. Only one between basic/token/apiKey authentication mode must be configured. |
-| `apiKey` | String | false | " " (empty string) | The apiKey used by the connector to connect to the ElasticSearch cluster. Only one between basic/token/apiKey authentication mode must be configured. |
-| `canonicalKeyFields` | Boolean | false | false | Whether to sort the key fields for JSON and Avro or not. If it is set to `true` and the record key schema is `JSON` or `AVRO`, the serialized object does not consider the order of properties. |
-| `stripNonPrintableCharacters` | Boolean | false | true | Whether to remove all non-printable characters from the document or not. If it is set to true, all non-printable characters are removed from the document. |
-| `idHashingAlgorithm` | enum(NONE,SHA256,SHA512) | false | NONE | Hashing algorithm to use for the document id. This is useful in order to be compliant with the ElasticSearch _id hard limit of 512 bytes. |
-| `conditionalIdHashing` | Boolean | false | false | This option only works if idHashingAlgorithm is set. If enabled, the hashing is performed only if the id is greater than 512 bytes otherwise the hashing is performed on each document in any case. |
-| `copyKeyFields` | Boolean | false | false | If the message key schema is AVRO or JSON, the message key fields are copied into the ElasticSearch document. |
+| Name | Type | Required | Sensitive | Default | Description |
+|--------------------------------|------------------------------------------------------|----------|-----------|--------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `elasticSearchUrl` | String | true | false | " " (empty string) | The URL of the Elasticsearch cluster to which the connector connects. |
+| `indexName` | String | false | false | " " (empty string) | The index name to which the connector writes messages. The default value is the topic name. It accepts date formats in the name to support event-time-based indexes with the pattern `%{+}`. For example, if the event time of the record is 1645182000000L and the indexName is `logs-%{+yyyy-MM-dd}`, the formatted index name is `logs-2022-02-18`. |
+| `schemaEnable` | Boolean | false | false | false | Turn on the Schema Aware mode. |
+| `createIndexIfNeeded` | Boolean | false | false | false | Create the index if it does not exist. |
+| `maxRetries` | Integer | false | false | 1 | The maximum number of retries for elasticsearch requests. Use -1 to disable it. |
+| `retryBackoffInMs` | Integer | false | false | 100 | The base time to wait when retrying an Elasticsearch request (in milliseconds). |
+| `maxRetryTimeInSec` | Integer | false | false | 86400 | The maximum retry time interval in seconds for retrying an elasticsearch request. |
+| `bulkEnabled` | Boolean | false | false | false | Enable the elasticsearch bulk processor to flush write requests based on the number or size of requests, or after a given period. |
+| `bulkActions` | Integer | false | false | 1000 | The maximum number of actions per elasticsearch bulk request. Use -1 to disable it. |
+| `bulkSizeInMb` | Integer | false | false | 5 | The maximum size in megabytes of elasticsearch bulk requests. Use -1 to disable it. |
+| `bulkConcurrentRequests` | Integer | false | false | 0 | The maximum number of in flight elasticsearch bulk requests. The default 0 allows the execution of a single request. A value of 1 means 1 concurrent request is allowed to be executed while accumulating new bulk requests. |
+| `bulkFlushIntervalInMs` | Long | false | false | 1000 | The maximum period of time to wait for flushing pending writes when bulk writes are enabled. -1 or zero means the scheduled flushing is disabled. |
+| `compressionEnabled` | Boolean | false | false | false | Enable elasticsearch request compression. |
+| `connectTimeoutInMs` | Integer | false | false | 5000 | The elasticsearch client connection timeout in milliseconds. |
+| `connectionRequestTimeoutInMs` | Integer | false | false | 1000 | The time in milliseconds for getting a connection from the elasticsearch connection pool. |
+| `connectionIdleTimeoutInMs` | Integer | false | false | 5 | Idle connection timeout to prevent a read timeout. |
+| `keyIgnore` | Boolean | false | false | true | Whether to ignore the record key when building the Elasticsearch document `_id`. If `primaryFields` is defined, the connector extracts the primary fields from the payload to build the document `_id`. If no `primaryFields` are provided, Elasticsearch auto-generates a random document `_id`. |
+| `primaryFields` | String | false | false | "id" | The comma separated ordered list of field names used to build the Elasticsearch document `_id` from the record value. If this list is a singleton, the field is converted as a string. If this list has 2 or more fields, the generated `_id` is a string representation of a JSON array of the field values. |
+| `nullValueAction` | enum (IGNORE,DELETE,FAIL) | false | false | IGNORE | How to handle records with null values. Possible options are IGNORE, DELETE, or FAIL. The default is IGNORE, which ignores the message. |
+| `malformedDocAction` | enum (IGNORE,WARN,FAIL) | false | false | FAIL | How to handle documents that Elasticsearch rejects due to malformation. Possible options are IGNORE, WARN, or FAIL. The default is FAIL, which fails the Elasticsearch document. |
+| `stripNulls` | Boolean | false | false | true | If stripNulls is false, elasticsearch _source includes 'null' for empty fields (for example {"foo": null}), otherwise null fields are stripped. |
+| `socketTimeoutInMs` | Integer | false | false | 60000 | The socket timeout in milliseconds waiting to read the elasticsearch response. |
+| `typeName` | String | false | false | "_doc" | The type name to which the connector writes messages.
The value should be set explicitly to a valid type name other than "_doc" for Elasticsearch versions before 6.2, and left at the default otherwise. |
+| `indexNumberOfShards` | int | false | false | 1 | The number of shards of the index. |
+| `indexNumberOfReplicas` | int | false | false | 1 | The number of replicas of the index. |
+| `username` | String | false | true | " " (empty string) | The username used by the connector to connect to the Elasticsearch cluster.
If `username` is set, then `password` should also be provided. |
+| `password` | String | false | true | " " (empty string) | The password used by the connector to connect to the Elasticsearch cluster.
If `username` is set, then `password` should also be provided. |
+| `ssl` | ElasticSearchSslConfig | false | false | | Configuration for TLS-encrypted communication. |
+| `compatibilityMode` | enum (AUTO,ELASTICSEARCH,ELASTICSEARCH_7,OPENSEARCH) | false | false | AUTO | Specify the compatibility mode with the Elasticsearch cluster. `AUTO` tries to auto-detect the correct compatibility mode. Use `ELASTICSEARCH_7` if the target cluster is running Elasticsearch 7 or earlier. Use `ELASTICSEARCH` if the target cluster is running Elasticsearch 8 or higher. Use `OPENSEARCH` if the target cluster is running OpenSearch. |
+| `token` | String | false | true | " " (empty string) | The token used by the connector to connect to the Elasticsearch cluster. Only one of the basic, token, or apiKey authentication modes can be configured. |
+| `apiKey` | String | false | true | " " (empty string) | The apiKey used by the connector to connect to the Elasticsearch cluster. Only one of the basic, token, or apiKey authentication modes can be configured. |
+| `canonicalKeyFields` | Boolean | false | false | false | Whether to sort the key fields for JSON and Avro. If set to `true` and the record key schema is `JSON` or `AVRO`, the serialized object does not depend on the order of the properties. |
+| `stripNonPrintableCharacters` | Boolean | false | false | true | Whether to remove all non-printable characters from the document. If set to `true`, all non-printable characters are removed from the document. |
+| `idHashingAlgorithm` | enum(NONE,SHA256,SHA512) | false | false | NONE | Hashing algorithm to use for the document id. This is useful for complying with the Elasticsearch `_id` hard limit of 512 bytes. |
+| `conditionalIdHashing` | Boolean | false | false | false | This option takes effect only if `idHashingAlgorithm` is set. If enabled, hashing is performed only when the id exceeds 512 bytes; otherwise, hashing is performed on every document. |
+| `copyKeyFields` | Boolean | false | false | false | If the message key schema is AVRO or JSON, the message key fields are copied into the Elasticsearch document. |
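To make the event-time pattern in `indexName` concrete, here is a standalone sketch (our own helper, not the connector's code) that splits a pattern such as `logs-%{+yyyy-MM-dd}` into a literal prefix and a date pattern, then formats an epoch-millis event time, assuming UTC:

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class IndexNameSketch {
    // Format an event-time-based index name from a literal prefix,
    // a java.time date pattern, and an event time in epoch milliseconds.
    static String formatIndexName(String prefix, String datePattern, long eventTimeMillis) {
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern(datePattern).withZone(ZoneOffset.UTC);
        return prefix + fmt.format(Instant.ofEpochMilli(eventTimeMillis));
    }

    public static void main(String[] args) {
        // `logs-%{+yyyy-MM-dd}` -> prefix "logs-", pattern "yyyy-MM-dd"
        System.out.println(formatIndexName("logs-", "yyyy-MM-dd", 1645182000000L)); // logs-2022-02-18
    }
}
```

This reproduces the example in the table above: event time 1645182000000L falls on 2022-02-18 UTC, so the resolved index is `logs-2022-02-18`.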
diff --git a/connectors/influxdb-sink/v3.1.1.1/influxdb-sink.md b/connectors/influxdb-sink/v3.1.1.1/influxdb-sink.md
index 872ee51de..0cfbd7670 100644
--- a/connectors/influxdb-sink/v3.1.1.1/influxdb-sink.md
+++ b/connectors/influxdb-sink/v3.1.1.1/influxdb-sink.md
@@ -31,31 +31,31 @@ The configuration of the InfluxDB sink connector has the following properties.
## Property
### InfluxDBv2
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `influxdbUrl` |String| true|" " (empty string) | The URL of the InfluxDB instance. |
-| `token` | String|true| " " (empty string) |The authentication token used to authenticate to InfluxDB. |
-| `organization` | String| true|" " (empty string) | The InfluxDB organization to write to. |
-| `bucket` |String| true | " " (empty string)| The InfluxDB bucket to write to. |
-| `precision` | String|false| ns | The timestamp precision for writing data to InfluxDB.
Below are the available options:ns
us
ms
s|
-| `logLevel` | String|false| NONE|The log level for InfluxDB request and response.
Below are the available options:NONE
BASIC
HEADERS
FULL|
-| `gzipEnable` | boolean|false | false | Whether to enable gzip or not. |
-| `batchTimeMs` |long|false| 1000L | The InfluxDB operation time in milliseconds. |
-| `batchSize` | int|false|200| The batch size of writing to InfluxDB. |
+| Name | Type | Required | Sensitive | Default | Description |
+|----------------|---------|----------|-----------|--------------------|-------------------------------------------------------------------------------------------------------------------------------------------|
+| `influxdbUrl` | String | true | false | " " (empty string) | The URL of the InfluxDB instance. |
+| `token` | String | true | true | " " (empty string) | The authentication token used to authenticate to InfluxDB. |
+| `organization` | String | true | false | " " (empty string) | The InfluxDB organization to write to. |
+| `bucket` | String | true | false | " " (empty string) | The InfluxDB bucket to write to. |
+| `precision` | String | false | false | ns | The timestamp precision for writing data to InfluxDB. Available options: ns, us, ms, s. |
+| `logLevel` | String | false | false | NONE | The log level for InfluxDB requests and responses. Available options: NONE, BASIC, HEADERS, FULL. |
+| `gzipEnable` | boolean | false | false | false | Whether to enable gzip or not. |
+| `batchTimeMs` | long | false | false | 1000L | The InfluxDB operation time in milliseconds. |
+| `batchSize` | int | false | false | 200 | The batch size of writing to InfluxDB. |
### InfluxDBv1
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `influxdbUrl` |String| true|" " (empty string) | The URL of the InfluxDB instance. |
-| `username` | String|false| " " (empty string) |The username used to authenticate to InfluxDB. |
-| `password` | String| false|" " (empty string) | The password used to authenticate to InfluxDB. |
-| `database` |String| true | " " (empty string)| The InfluxDB to which write messages. |
-| `consistencyLevel` | String|false|ONE | The consistency level for writing data to InfluxDB.
Below are the available options:ALL
ANY
ONE
QUORUM |
-| `logLevel` | String|false| NONE|The log level for InfluxDB request and response.
Below are the available options:NONE
BASIC
HEADERS
FULL|
-| `retentionPolicy` | String|false| autogen| The retention policy for InfluxDB. |
-| `gzipEnable` | boolean|false | false | Whether to enable gzip or not. |
-| `batchTimeMs` |long|false| 1000L | The InfluxDB operation time in milliseconds. |
-| `batchSize` | int|false|200| The batch size of writing to InfluxDB. |
+| Name | Type | Required | Sensitive | Default | Description |
+|--------------------|---------|----------|-----------|--------------------|-------------------------------------------------------------------------------------------------------------------------------------------|
+| `influxdbUrl` | String | true | false | " " (empty string) | The URL of the InfluxDB instance. |
+| `username` | String | false | true | " " (empty string) | The username used to authenticate to InfluxDB. |
+| `password` | String | false | true | " " (empty string) | The password used to authenticate to InfluxDB. |
+| `database` | String | true | false | " " (empty string) | The InfluxDB database to which messages are written. |
+| `consistencyLevel` | String | false | false | ONE | The consistency level for writing data to InfluxDB. Available options: ALL, ANY, ONE, QUORUM. |
+| `logLevel` | String | false | false | NONE | The log level for InfluxDB requests and responses. Available options: NONE, BASIC, HEADERS, FULL. |
+| `retentionPolicy` | String | false | false | autogen | The retention policy for InfluxDB. |
+| `gzipEnable` | boolean | false | false | false | Whether to enable gzip or not. |
+| `batchTimeMs` | long | false | false | 1000L | The InfluxDB operation time in milliseconds. |
+| `batchSize` | int | false | false | 200 | The batch size of writing to InfluxDB. |
## Example
Before using the InfluxDB sink connector, you need to create a configuration file through one of the following methods.
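For example, a minimal YAML configuration file for the InfluxDBv2 properties above might look like the following sketch (all values are placeholders, and the `configs` wrapper assumes the usual Pulsar IO sink config layout):

```yaml
configs:
  influxdbUrl: "http://localhost:8086"
  token: "REPLACE_WITH_TOKEN"   # sensitive: supply via a secret, not plain text
  organization: "my-org"
  bucket: "my-bucket"
  precision: "ns"
  logLevel: "NONE"
  gzipEnable: false
  batchTimeMs: 1000
  batchSize: 200
```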
diff --git a/connectors/jdbc-clickhouse-sink/v3.1.1.1/jdbc-clickhouse-sink.md b/connectors/jdbc-clickhouse-sink/v3.1.1.1/jdbc-clickhouse-sink.md
index 5a40de9b8..37badda0c 100644
--- a/connectors/jdbc-clickhouse-sink/v3.1.1.1/jdbc-clickhouse-sink.md
+++ b/connectors/jdbc-clickhouse-sink/v3.1.1.1/jdbc-clickhouse-sink.md
@@ -31,16 +31,16 @@ The configuration of the JDBC sink connector has the following properties.
## Property
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `userName` | String|false | " " (empty string) | The username used to connect to the database specified by `jdbcUrl`.
**Note: `userName` is case-sensitive.**|
-| `password` | String|false | " " (empty string)| The password used to connect to the database specified by `jdbcUrl`.
**Note: `password` is case-sensitive.**|
-| `jdbcUrl` | String|true | " " (empty string) | The JDBC URL of the database to which the connector connects. |
-| `tableName` | String|true | " " (empty string) | The name of the table to which the connector writes. |
-| `nonKey` | String|false | " " (empty string) | A comma-separated list contains the fields used in updating events. |
-| `key` | String|false | " " (empty string) | A comma-separated list contains the fields used in `where` condition of updating and deleting events. |
-| `timeoutMs` | int| false|500 | The JDBC operation timeout in milliseconds. |
-| `batchSize` | int|false | 200 | The batch size of updates made to the database. |
+| Name | Type | Required | Sensitive | Default | Description |
+|-------------|--------|----------|-----------|--------------------|----------------------------------------------------------------------------------------------------------------------|
+| `userName` | String | false | true | " " (empty string) | The username used to connect to the database specified by `jdbcUrl`.
**Note: `userName` is case-sensitive.** |
+| `password` | String | false | true | " " (empty string) | The password used to connect to the database specified by `jdbcUrl`.
**Note: `password` is case-sensitive.** |
+| `jdbcUrl` | String | true | false | " " (empty string) | The JDBC URL of the database to which the connector connects. |
+| `tableName` | String | true | false | " " (empty string) | The name of the table to which the connector writes. |
+| `nonKey`    | String | false    | false     | " " (empty string) | A comma-separated list of the fields used in updating events.                                                          |
+| `key`       | String | false    | false     | " " (empty string) | A comma-separated list of the fields used in the `where` condition of updating and deleting events.                   |
+| `timeoutMs` | int | false | false | 500 | The JDBC operation timeout in milliseconds. |
+| `batchSize` | int | false | false | 200 | The batch size of updates made to the database. |
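As a sketch of the properties above, a minimal YAML configuration for this sink might look like the following (placeholder values; the `configs` wrapper assumes the usual Pulsar IO sink config layout):

```yaml
configs:
  userName: "clickhouse"             # sensitive, case-sensitive
  password: "REPLACE_WITH_PASSWORD"  # sensitive, case-sensitive
  jdbcUrl: "jdbc:clickhouse://localhost:8123/pulsar_clickhouse_jdbc_sink"
  tableName: "pulsar_clickhouse_jdbc_sink"
  key: "id"
  nonKey: "name,description"
  timeoutMs: 500
  batchSize: 200
```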
## Example
diff --git a/connectors/jdbc-mariadb-sink/v3.1.1.1/jdbc-mariadb-sink.md b/connectors/jdbc-mariadb-sink/v3.1.1.1/jdbc-mariadb-sink.md
index db2e3e8d4..914743525 100644
--- a/connectors/jdbc-mariadb-sink/v3.1.1.1/jdbc-mariadb-sink.md
+++ b/connectors/jdbc-mariadb-sink/v3.1.1.1/jdbc-mariadb-sink.md
@@ -31,16 +31,16 @@ The configuration of the JDBC sink connector has the following properties.
## Property
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `userName` | String|false | " " (empty string) | The username used to connect to the database specified by `jdbcUrl`.
**Note: `userName` is case-sensitive.**|
-| `password` | String|false | " " (empty string)| The password used to connect to the database specified by `jdbcUrl`.
**Note: `password` is case-sensitive.**|
-| `jdbcUrl` | String|true | " " (empty string) | The JDBC URL of the database to which the connector connects. |
-| `tableName` | String|true | " " (empty string) | The name of the table to which the connector writes. |
-| `nonKey` | String|false | " " (empty string) | A comma-separated list contains the fields used in updating events. |
-| `key` | String|false | " " (empty string) | A comma-separated list contains the fields used in `where` condition of updating and deleting events. |
-| `timeoutMs` | int| false|500 | The JDBC operation timeout in milliseconds. |
-| `batchSize` | int|false | 200 | The batch size of updates made to the database. |
+| Name | Type | Required | Sensitive | Default | Description |
+|-------------|--------|----------|-----------|--------------------|----------------------------------------------------------------------------------------------------------------------|
+| `userName` | String | false | true | " " (empty string) | The username used to connect to the database specified by `jdbcUrl`.
**Note: `userName` is case-sensitive.** |
+| `password` | String | false | true | " " (empty string) | The password used to connect to the database specified by `jdbcUrl`.
**Note: `password` is case-sensitive.** |
+| `jdbcUrl` | String | true | false | " " (empty string) | The JDBC URL of the database to which the connector connects. |
+| `tableName` | String | true | false | " " (empty string) | The name of the table to which the connector writes. |
+| `nonKey`    | String | false    | false     | " " (empty string) | A comma-separated list of the fields used in updating events.                                                          |
+| `key`       | String | false    | false     | " " (empty string) | A comma-separated list of the fields used in the `where` condition of updating and deleting events.                   |
+| `timeoutMs` | int | false | false | 500 | The JDBC operation timeout in milliseconds. |
+| `batchSize` | int | false | false | 200 | The batch size of updates made to the database. |
## Example
diff --git a/connectors/jdbc-postgres-sink/v3.1.1.1/jdbc-postgres-sink.md b/connectors/jdbc-postgres-sink/v3.1.1.1/jdbc-postgres-sink.md
index 6d3b568ae..8376e973c 100644
--- a/connectors/jdbc-postgres-sink/v3.1.1.1/jdbc-postgres-sink.md
+++ b/connectors/jdbc-postgres-sink/v3.1.1.1/jdbc-postgres-sink.md
@@ -31,16 +31,16 @@ The configuration of the JDBC sink connector has the following properties.
## Property
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `userName` | String|false | " " (empty string) | The username used to connect to the database specified by `jdbcUrl`.
**Note: `userName` is case-sensitive.**|
-| `password` | String|false | " " (empty string)| The password used to connect to the database specified by `jdbcUrl`.
**Note: `password` is case-sensitive.**|
-| `jdbcUrl` | String|true | " " (empty string) | The JDBC URL of the database to which the connector connects. |
-| `tableName` | String|true | " " (empty string) | The name of the table to which the connector writes. |
-| `nonKey` | String|false | " " (empty string) | A comma-separated list contains the fields used in updating events. |
-| `key` | String|false | " " (empty string) | A comma-separated list contains the fields used in `where` condition of updating and deleting events. |
-| `timeoutMs` | int| false|500 | The JDBC operation timeout in milliseconds. |
-| `batchSize` | int|false | 200 | The batch size of updates made to the database. |
+| Name | Type | Required | Sensitive | Default | Description |
+|-------------|--------|----------|-----------|--------------------|----------------------------------------------------------------------------------------------------------------------|
+| `userName` | String | false | true | " " (empty string) | The username used to connect to the database specified by `jdbcUrl`.
**Note: `userName` is case-sensitive.** |
+| `password` | String | false | true | " " (empty string) | The password used to connect to the database specified by `jdbcUrl`.
**Note: `password` is case-sensitive.** |
+| `jdbcUrl` | String | true | false | " " (empty string) | The JDBC URL of the database to which the connector connects. |
+| `tableName` | String | true | false | " " (empty string) | The name of the table to which the connector writes. |
+| `nonKey`    | String | false    | false     | " " (empty string) | A comma-separated list of the fields used in updating events.                                                          |
+| `key`       | String | false    | false     | " " (empty string) | A comma-separated list of the fields used in the `where` condition of updating and deleting events.                   |
+| `timeoutMs` | int | false | false | 500 | The JDBC operation timeout in milliseconds. |
+| `batchSize` | int | false | false | 200 | The batch size of updates made to the database. |
## Example
diff --git a/connectors/jdbc-sqlite-sink/v3.1.1.1/jdbc-sqlite-sink.md b/connectors/jdbc-sqlite-sink/v3.1.1.1/jdbc-sqlite-sink.md
index db2e3e8d4..914743525 100644
--- a/connectors/jdbc-sqlite-sink/v3.1.1.1/jdbc-sqlite-sink.md
+++ b/connectors/jdbc-sqlite-sink/v3.1.1.1/jdbc-sqlite-sink.md
@@ -31,16 +31,16 @@ The configuration of the JDBC sink connector has the following properties.
## Property
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `userName` | String|false | " " (empty string) | The username used to connect to the database specified by `jdbcUrl`.
**Note: `userName` is case-sensitive.**|
-| `password` | String|false | " " (empty string)| The password used to connect to the database specified by `jdbcUrl`.
**Note: `password` is case-sensitive.**|
-| `jdbcUrl` | String|true | " " (empty string) | The JDBC URL of the database to which the connector connects. |
-| `tableName` | String|true | " " (empty string) | The name of the table to which the connector writes. |
-| `nonKey` | String|false | " " (empty string) | A comma-separated list contains the fields used in updating events. |
-| `key` | String|false | " " (empty string) | A comma-separated list contains the fields used in `where` condition of updating and deleting events. |
-| `timeoutMs` | int| false|500 | The JDBC operation timeout in milliseconds. |
-| `batchSize` | int|false | 200 | The batch size of updates made to the database. |
+| Name | Type | Required | Sensitive | Default | Description |
+|-------------|--------|----------|-----------|--------------------|----------------------------------------------------------------------------------------------------------------------|
+| `userName` | String | false | true | " " (empty string) | The username used to connect to the database specified by `jdbcUrl`.
**Note: `userName` is case-sensitive.** |
+| `password` | String | false | true | " " (empty string) | The password used to connect to the database specified by `jdbcUrl`.
**Note: `password` is case-sensitive.** |
+| `jdbcUrl` | String | true | false | " " (empty string) | The JDBC URL of the database to which the connector connects. |
+| `tableName` | String | true | false | " " (empty string) | The name of the table to which the connector writes. |
+| `nonKey`    | String | false    | false     | " " (empty string) | A comma-separated list of the fields used in updating events.                                                          |
+| `key`       | String | false    | false     | " " (empty string) | A comma-separated list of the fields used in the `where` condition of updating and deleting events.                   |
+| `timeoutMs` | int | false | false | 500 | The JDBC operation timeout in milliseconds. |
+| `batchSize` | int | false | false | 200 | The batch size of updates made to the database. |
## Example
diff --git a/connectors/kinesis-sink/v3.1.1.1/kinesis-sink.md b/connectors/kinesis-sink/v3.1.1.1/kinesis-sink.md
index d798fd195..433ebe00a 100644
--- a/connectors/kinesis-sink/v3.1.1.1/kinesis-sink.md
+++ b/connectors/kinesis-sink/v3.1.1.1/kinesis-sink.md
@@ -106,19 +106,19 @@ You can use the AWS Kinesis `Data Viewer` to view the data. ![](/images/connecto
This table outlines the properties of an AWS Kinesis sink connector.
-| Name | Type | Required | Default | Description |
-|-----------------------------|---------------|----------|--------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `awsKinesisStreamName` | String | true | " " (empty string) | The Kinesis stream name. |
-| `awsRegion` | String | true | " " (empty string) | The AWS Kinesis [region](https://www.aws-services.info/regions.html).
**Example:**
us-west-1, us-west-2. |
-| `awsCredentialPluginName` | String | false | " " (empty string) | The fully-qualified class name of implementation of [AwsCredentialProviderPlugin](https://github.com/apache/pulsar/blob/master/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java). Please refer to [Configure AwsCredentialProviderPlugin](###Configure AwsCredentialProviderPlugin) |
-| `awsCredentialPluginParam` | String | false | " " (empty string) | The JSON parameter to initialize `awsCredentialsProviderPlugin`. Please refer to [Configure AwsCredentialProviderPlugin](###Configure AwsCredentialProviderPlugin) |
-| `awsEndpoint` | String | false | " " (empty string) | A custom Kinesis endpoint. For more information, see [AWS documentation](https://docs.aws.amazon.com/general/latest/gr/rande.html). |
-| `retainOrdering` | Boolean | false | false | Whether Pulsar connectors retain the ordering when moving messages from Pulsar to Kinesis. |
-| `messageFormat` | MessageFormat | false | ONLY_RAW_PAYLOAD | Message format in which Kinesis sink converts Pulsar messages and publishes them to Kinesis streams.
Available options include:
`ONLY_RAW_PAYLOAD`: Kinesis sink directly publishes Pulsar message payload as a message into the configured Kinesis stream.
`FULL_MESSAGE_IN_JSON`: Kinesis sink creates a JSON payload with Pulsar message payload, properties, and encryptionCtx, and publishes JSON payload into the configured Kinesis stream.
`FULL_MESSAGE_IN_FB`: Kinesis sink creates a flatbuffers serialized payload with Pulsar message payload, properties, and encryptionCtx, and publishes flatbuffers payload into the configured Kinesis stream.
`FULL_MESSAGE_IN_JSON_EXPAND_VALUE`: Kinesis sink sends a JSON structure containing the record topic name, key, payload, properties, and event time. The record schema is used to convert the value to JSON. |
-| `jsonIncludeNonNulls` | Boolean | false | true | Only the properties with non-null values are included when the message format is `FULL_MESSAGE_IN_JSON_EXPAND_VALUE`. |
-| `jsonFlatten` | Boolean | false | false | When it is set to `true` and the message format is `FULL_MESSAGE_IN_JSON_EXPAND_VALUE`, the output JSON is flattened. |
-| `retryInitialDelayInMillis` | Long | false | 100 | The initial delay (in milliseconds) between retries. |
-| `retryMaxDelayInMillis` | Long | false | 60000 | The maximum delay(in milliseconds) between retries. |
+| Name | Type | Required | Sensitive | Default | Description |
+|-----------------------------|---------------|----------|-----------|--------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `awsKinesisStreamName` | String | true | false | " " (empty string) | The Kinesis stream name. |
+| `awsRegion` | String | true | false | " " (empty string) | The AWS Kinesis [region](https://www.aws-services.info/regions.html).
**Example:**
us-west-1, us-west-2. |
+| `awsCredentialPluginName`   | String        | false    | false     | " " (empty string) | The fully-qualified class name of the implementation of [AwsCredentialProviderPlugin](https://github.com/apache/pulsar/blob/master/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java). For more information, see [Configure AwsCredentialProviderPlugin](#configure-awscredentialproviderplugin). |
+| `awsCredentialPluginParam`  | String        | false    | true      | " " (empty string) | The JSON parameter to initialize `awsCredentialsProviderPlugin`. For more information, see [Configure AwsCredentialProviderPlugin](#configure-awscredentialproviderplugin). |
+| `awsEndpoint` | String | false | false | " " (empty string) | A custom Kinesis endpoint. For more information, see [AWS documentation](https://docs.aws.amazon.com/general/latest/gr/rande.html). |
+| `retainOrdering` | Boolean | false | false | false | Whether Pulsar connectors retain the ordering when moving messages from Pulsar to Kinesis. |
+| `messageFormat` | MessageFormat | false | false | ONLY_RAW_PAYLOAD | Message format in which Kinesis sink converts Pulsar messages and publishes them to Kinesis streams.
Available options include:
`ONLY_RAW_PAYLOAD`: Kinesis sink directly publishes Pulsar message payload as a message into the configured Kinesis stream.
`FULL_MESSAGE_IN_JSON`: Kinesis sink creates a JSON payload with Pulsar message payload, properties, and encryptionCtx, and publishes JSON payload into the configured Kinesis stream.
`FULL_MESSAGE_IN_FB`: Kinesis sink creates a flatbuffers serialized payload with Pulsar message payload, properties, and encryptionCtx, and publishes flatbuffers payload into the configured Kinesis stream.
`FULL_MESSAGE_IN_JSON_EXPAND_VALUE`: Kinesis sink sends a JSON structure containing the record topic name, key, payload, properties, and event time. The record schema is used to convert the value to JSON. |
+| `jsonIncludeNonNulls`       | Boolean       | false    | false     | true               | When set to `true` and the message format is `FULL_MESSAGE_IN_JSON_EXPAND_VALUE`, only properties with non-null values are included in the output JSON. |
+| `jsonFlatten` | Boolean | false | false | false | When it is set to `true` and the message format is `FULL_MESSAGE_IN_JSON_EXPAND_VALUE`, the output JSON is flattened. |
+| `retryInitialDelayInMillis` | Long | false | false | 100 | The initial delay (in milliseconds) between retries. |
+| `retryMaxDelayInMillis`     | Long          | false    | false     | 60000              | The maximum delay (in milliseconds) between retries. |
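To show how these properties combine, here is a minimal, hypothetical sink config fragment in YAML (the stream name, region, and credential values are placeholders, and the top-level `configs` block follows the convention used by `pulsar-admin sinks create --sink-config-file`):

```yaml
configs:
  awsKinesisStreamName: "my-stream"                       # placeholder stream name
  awsRegion: "us-west-2"
  awsCredentialPluginParam: '{"accessKey":"...","secretKey":"..."}'  # sensitive
  messageFormat: "FULL_MESSAGE_IN_JSON_EXPAND_VALUE"
  jsonFlatten: true
  retryInitialDelayInMillis: 100
  retryMaxDelayInMillis: 60000
```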
### Configure AwsCredentialProviderPlugin
diff --git a/connectors/kinesis-source/v3.1.1.1/kinesis-source.md b/connectors/kinesis-source/v3.1.1.1/kinesis-source.md
index f3e831dee..241977c78 100644
--- a/connectors/kinesis-source/v3.1.1.1/kinesis-source.md
+++ b/connectors/kinesis-source/v3.1.1.1/kinesis-source.md
@@ -150,23 +150,23 @@ key:[myPartitionKey], properties:[=496436655431439836134428954504397640092247887
This table outlines the properties of an AWS Kinesis source connector.
-| Name | Type | Required | Default | Description |
-|----------------------------|-------------------------|----------|---------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| `awsKinesisStreamName` | String | true | " " (empty string) | The Kinesis stream name. |
-| `awsRegion` | String | false | " " (empty string) | The AWS region.
**Example**
us-west-1, us-west-2. |
-| `awsCredentialPluginName` | String | false | " " (empty string) | The fully-qualified class name of implementation of [AwsCredentialProviderPlugin](https://github.com/apache/pulsar/blob/master/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java). For more information, see [Configure AwsCredentialProviderPlugin](###Configure AwsCredentialProviderPlugin). |
-| `awsCredentialPluginParam` | String | false | " " (empty string) | The JSON parameter to initialize `awsCredentialsProviderPlugin`. For more information, see [Configure AwsCredentialProviderPlugin](###Configure AwsCredentialProviderPlugin). |
-| `awsEndpoint` | String | false | " " (empty string) | The Kinesis end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). |
-| `dynamoEndpoint` | String | false | " " (empty string) | The Dynamo end-point URL, which can be found at [here](https://docs.aws.amazon.com/general/latest/gr/rande.html). |
-| `cloudwatchEndpoint` | String | false | " " (empty string) | The Cloudwatch end-point URL. For more information, see[Amazon documentation](https://docs.aws.amazon.com/general/latest/gr/rande.html). |
-| `applicationName` | String | false | Pulsar IO connector | The name of the Amazon Kinesis application, which will be used as the table name for DynamoDB. |
-| `initialPositionInStream` | InitialPositionInStream | false | LATEST | The position where the connector starts from.
Below are the available options:
`AT_TIMESTAMP`: start from the record at or after the specified timestamp.
`LATEST`: start after the most recent data record.
`TRIM_HORIZON`: start from the oldest available data record. |
-| `startAtTime` | Date | false | " " (empty string) | If set to `AT_TIMESTAMP`, it specifies the time point to start consumption. |
-| `checkpointInterval` | Long | false | 60000 | The frequency of the Kinesis stream checkpoint in milliseconds. |
-| `backoffTime` | Long | false | 3000 | The amount of time to delay between requests when the connector encounters a throttling exception from AWS Kinesis in milliseconds. |
-| `numRetries` | int | false | 3 | The number of re-attempts when the connector encounters an exception while trying to set a checkpoint. |
-| `receiveQueueSize` | int | false | 1000 | The maximum number of AWS records that can be buffered inside the connector.
Once the `receiveQueueSize` is reached, the connector does not consume any messages from Kinesis until some messages in the queue are successfully consumed. |
-| `useEnhancedFanOut` | boolean | false | true | If set to true, it uses Kinesis enhanced fan-out.
If set to false, it uses polling. |
+| Name | Type | Required | Sensitive | Default | Description |
+|----------------------------|-------------------------|----------|-----------|---------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `awsKinesisStreamName` | String | true | false | " " (empty string) | The Kinesis stream name. |
+| `awsRegion` | String | false | false | " " (empty string) | The AWS region.
**Example**
us-west-1, us-west-2. |
+| `awsCredentialPluginName`  | String                  | false    | false     | " " (empty string)  | The fully-qualified class name of the implementation of [AwsCredentialProviderPlugin](https://github.com/apache/pulsar/blob/master/pulsar-io/aws/src/main/java/org/apache/pulsar/io/aws/AwsCredentialProviderPlugin.java). For more information, see [Configure AwsCredentialProviderPlugin](#configure-awscredentialproviderplugin). |
+| `awsCredentialPluginParam` | String                  | false    | true      | " " (empty string)  | The JSON parameter to initialize `awsCredentialsProviderPlugin`. For more information, see [Configure AwsCredentialProviderPlugin](#configure-awscredentialproviderplugin). |
+| `awsEndpoint`              | String                  | false    | false     | " " (empty string)  | The Kinesis end-point URL. For more information, see the [AWS documentation](https://docs.aws.amazon.com/general/latest/gr/rande.html). |
+| `dynamoEndpoint`           | String                  | false    | false     | " " (empty string)  | The DynamoDB end-point URL. For more information, see the [AWS documentation](https://docs.aws.amazon.com/general/latest/gr/rande.html). |
+| `cloudwatchEndpoint`       | String                  | false    | false     | " " (empty string)  | The CloudWatch end-point URL. For more information, see the [AWS documentation](https://docs.aws.amazon.com/general/latest/gr/rande.html). |
+| `applicationName` | String | false | false | Pulsar IO connector | The name of the Amazon Kinesis application, which will be used as the table name for DynamoDB. |
+| `initialPositionInStream` | InitialPositionInStream | false | false | LATEST | The position where the connector starts from.
Below are the available options:
`AT_TIMESTAMP`: start from the record at or after the specified timestamp.
`LATEST`: start after the most recent data record.
`TRIM_HORIZON`: start from the oldest available data record. |
+| `startAtTime`              | Date                    | false    | false     | " " (empty string)  | If `initialPositionInStream` is set to `AT_TIMESTAMP`, this specifies the point in time from which to start consumption. |
+| `checkpointInterval` | Long | false | false | 60000 | The frequency of the Kinesis stream checkpoint in milliseconds. |
+| `backoffTime`              | Long                    | false    | false     | 3000                | The delay in milliseconds between requests when the connector encounters a throttling exception from AWS Kinesis. |
+| `numRetries` | int | false | false | 3 | The number of re-attempts when the connector encounters an exception while trying to set a checkpoint. |
+| `receiveQueueSize` | int | false | false | 1000 | The maximum number of AWS records that can be buffered inside the connector.
Once the `receiveQueueSize` is reached, the connector does not consume any messages from Kinesis until some messages in the queue are successfully consumed. |
+| `useEnhancedFanOut` | boolean | false | false | true | If set to true, it uses Kinesis enhanced fan-out.
If set to false, it uses polling. |
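As a sketch of how these properties fit together, a hypothetical source config fragment might look like the following (the stream name and application name are placeholders; the `configs` block follows the convention used elsewhere in the Pulsar IO docs):

```yaml
configs:
  awsKinesisStreamName: "my-stream"             # placeholder stream name
  awsRegion: "us-west-2"
  applicationName: "pulsar-kinesis-source"      # placeholder; also used as the DynamoDB table name
  initialPositionInStream: "TRIM_HORIZON"       # start from the oldest available record
  checkpointInterval: 60000
  useEnhancedFanOut: false                      # fall back to polling
```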
### Configure AwsCredentialProviderPlugin
diff --git a/connectors/mongodb-sink/v3.1.1.1/mongodb-sink.md b/connectors/mongodb-sink/v3.1.1.1/mongodb-sink.md
index 3a0d3fe05..a50a0cdee 100644
--- a/connectors/mongodb-sink/v3.1.1.1/mongodb-sink.md
+++ b/connectors/mongodb-sink/v3.1.1.1/mongodb-sink.md
@@ -29,13 +29,13 @@ The configuration of the MongoDB sink connector has the following properties.
## Property
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `mongoUri` | String| true| " " (empty string) | The MongoDB URI to which the connector connects.
For more information, see [connection string URI format](https://docs.mongodb.com/manual/reference/connection-string/). |
-| `database` | String| true| " " (empty string)| The database name to which the collection belongs. |
-| `collection` | String| true| " " (empty string)| The collection name to which the connector writes messages. |
-| `batchSize` | int|false|100 | The batch size of writing messages to collections. |
-| `batchTimeMs` |long|false|1000| The batch operation interval in milliseconds. |
+| Name | Type | Required | Sensitive | Default | Description |
+|---------------|--------|----------|-----------|--------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `mongoUri` | String | true | true | " " (empty string) | The MongoDB URI to which the connector connects.
For more information, see [connection string URI format](https://docs.mongodb.com/manual/reference/connection-string/). |
+| `database` | String | true | false | " " (empty string) | The database name to which the collection belongs. |
+| `collection` | String | true | false | " " (empty string) | The collection name to which the connector writes messages. |
+| `batchSize` | int | false | false | 100 | The batch size of writing messages to collections. |
+| `batchTimeMs` | long | false | false | 1000 | The batch operation interval in milliseconds. |
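At a glance, the required properties above could be combined into a minimal, hypothetical config fragment (the URI, database, and collection are placeholders; note that `mongoUri` is sensitive because it may embed credentials):

```yaml
configs:
  mongoUri: "mongodb://user:pass@localhost:27017"  # placeholder URI; sensitive
  database: "pulsar"
  collection: "messages"
  batchSize: 100
  batchTimeMs: 1000
```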
## Example
diff --git a/connectors/rabbitmq-sink/v3.1.1.1/rabbitmq-sink.md b/connectors/rabbitmq-sink/v3.1.1.1/rabbitmq-sink.md
index 1930b62a7..01a36cb97 100644
--- a/connectors/rabbitmq-sink/v3.1.1.1/rabbitmq-sink.md
+++ b/connectors/rabbitmq-sink/v3.1.1.1/rabbitmq-sink.md
@@ -31,22 +31,22 @@ The configuration of the RabbitMQ sink connector has the following properties.
## Property
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `connectionName` |String| true | " " (empty string) | The connection name. |
-| `host` | String| true | " " (empty string) | The RabbitMQ host. |
-| `port` | int |true | 5672 | The RabbitMQ port. |
-| `virtualHost` |String|true | / | The virtual host used to connect to RabbitMQ. |
-| `username` | String|false | guest | The username used to authenticate to RabbitMQ. |
-| `password` | String|false | guest | The password used to authenticate to RabbitMQ. |
-| `queueName` | String|true | " " (empty string) | The RabbitMQ queue name that messages should be read from or written to. |
-| `requestedChannelMax` | int|false | 0 | The initially requested maximum channel number.
0 means unlimited. |
-| `requestedFrameMax` | int|false |0 | The initially requested maximum frame size in octets.
0 means unlimited. |
-| `connectionTimeout` | int|false | 60000 | The timeout of TCP connection establishment in milliseconds.
0 means infinite. |
-| `handshakeTimeout` | int|false | 10000 | The timeout of AMQP0-9-1 protocol handshake in milliseconds. |
-| `requestedHeartbeat` | int|false | 60 | The exchange to publish messages. |
-| `exchangeName` | String|true | " " (empty string) | The maximum number of messages that the server delivers.
0 means unlimited. |
-| `prefetchGlobal` |String|true | " " (empty string) |The routing key used to publish messages. |
+| Name | Type | Required | Sensitive | Default | Description |
+|-----------------------|--------|----------|-----------|--------------------|----------------------------------------------------------------------------------------|
+| `connectionName` | String | true | false | " " (empty string) | The connection name. |
+| `host` | String | true | false | " " (empty string) | The RabbitMQ host. |
+| `port` | int | true | false | 5672 | The RabbitMQ port. |
+| `virtualHost` | String | true | false | / | The virtual host used to connect to RabbitMQ. |
+| `username` | String | false | true | guest | The username used to authenticate to RabbitMQ. |
+| `password` | String | false | true | guest | The password used to authenticate to RabbitMQ. |
+| `queueName` | String | true | false | " " (empty string) | The RabbitMQ queue name that messages should be read from or written to. |
+| `requestedChannelMax` | int | false | false | 0 | The initially requested maximum channel number.
0 means unlimited. |
+| `requestedFrameMax` | int | false | false | 0 | The initially requested maximum frame size in octets.
0 means unlimited. |
+| `connectionTimeout` | int | false | false | 60000 | The timeout of TCP connection establishment in milliseconds.
0 means infinite. |
+| `handshakeTimeout` | int | false | false | 10000 | The timeout of AMQP0-9-1 protocol handshake in milliseconds. |
+| `requestedHeartbeat`  | int    | false    | false     | 60                 | The requested heartbeat timeout in seconds. |
+| `exchangeName`        | String | true     | false     | " " (empty string) | The exchange to publish messages to. |
+| `routingKey`          | String | true     | false     | " " (empty string) | The routing key used to publish messages. |
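As an illustration, a minimal, hypothetical sink config fragment might look like the following (host, queue, and exchange names are placeholders; in production, prefer a secrets mechanism over plain-text credentials):

```yaml
configs:
  connectionName: "pulsar-rabbitmq-sink"   # placeholder connection name
  host: "localhost"
  port: 5672
  virtualHost: "/"
  username: "guest"                        # sensitive
  password: "guest"                        # sensitive
  queueName: "my-queue"
  exchangeName: "my-exchange"
```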
## Example
diff --git a/connectors/rabbitmq-source/v3.1.1.1/rabbitmq-source.md b/connectors/rabbitmq-source/v3.1.1.1/rabbitmq-source.md
index 3f26ca420..1f4ce987f 100644
--- a/connectors/rabbitmq-source/v3.1.1.1/rabbitmq-source.md
+++ b/connectors/rabbitmq-source/v3.1.1.1/rabbitmq-source.md
@@ -29,22 +29,22 @@ The configuration of the RabbitMQ source connector has the following properties.
## Property
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `connectionName` |String| true | " " (empty string) | The connection name. |
-| `host` | String| true | " " (empty string) | The RabbitMQ host. |
-| `port` | int |true | 5672 | The RabbitMQ port. |
-| `virtualHost` |String|true | / | The virtual host used to connect to RabbitMQ. |
-| `username` | String|false | guest | The username used to authenticate to RabbitMQ. |
-| `password` | String|false | guest | The password used to authenticate to RabbitMQ. |
-| `queueName` | String|true | " " (empty string) | The RabbitMQ queue name that messages should be read from or written to. |
-| `requestedChannelMax` | int|false | 0 | The initially requested maximum channel number.
0 means unlimited. |
-| `requestedFrameMax` | int|false |0 | The initially requested maximum frame size in octets.
0 means unlimited. |
-| `connectionTimeout` | int|false | 60000 | The timeout of TCP connection establishment in milliseconds.
0 means infinite. |
-| `handshakeTimeout` | int|false | 10000 | The timeout of AMQP0-9-1 protocol handshake in milliseconds. |
-| `requestedHeartbeat` | int|false | 60 | The requested heartbeat timeout in seconds. |
-| `prefetchCount` | int|false | 0 | The maximum number of messages that the server delivers.
0 means unlimited. |
-| `prefetchGlobal` | boolean|false | false |Whether the setting should be applied to the entire channel rather than each consumer. |
+| Name | Type | Required | Sensitive | Default | Description |
+|-----------------------|---------|----------|-----------|--------------------|----------------------------------------------------------------------------------------|
+| `connectionName` | String | true | false | " " (empty string) | The connection name. |
+| `host` | String | true | false | " " (empty string) | The RabbitMQ host. |
+| `port` | int | true | false | 5672 | The RabbitMQ port. |
+| `virtualHost` | String | true | false | / | The virtual host used to connect to RabbitMQ. |
+| `username` | String | false | true | guest | The username used to authenticate to RabbitMQ. |
+| `password` | String | false | true | guest | The password used to authenticate to RabbitMQ. |
+| `queueName` | String | true | false | " " (empty string) | The RabbitMQ queue name that messages should be read from or written to. |
+| `requestedChannelMax` | int | false | false | 0 | The initially requested maximum channel number.
0 means unlimited. |
+| `requestedFrameMax` | int | false | false | 0 | The initially requested maximum frame size in octets.
0 means unlimited. |
+| `connectionTimeout` | int | false | false | 60000 | The timeout of TCP connection establishment in milliseconds.
0 means infinite. |
+| `handshakeTimeout` | int | false | false | 10000 | The timeout of AMQP0-9-1 protocol handshake in milliseconds. |
+| `requestedHeartbeat` | int | false | false | 60 | The requested heartbeat timeout in seconds. |
+| `prefetchCount` | int | false | false | 0 | The maximum number of messages that the server delivers.
0 means unlimited. |
+| `prefetchGlobal` | boolean | false | false | false | Whether the setting should be applied to the entire channel rather than each consumer. |
## Example
diff --git a/connectors/redis-sink/v3.1.1.1/redis-sink.md b/connectors/redis-sink/v3.1.1.1/redis-sink.md
index 687945136..698280eaa 100644
--- a/connectors/redis-sink/v3.1.1.1/redis-sink.md
+++ b/connectors/redis-sink/v3.1.1.1/redis-sink.md
@@ -29,20 +29,20 @@ The configuration of the Redis sink connector has the following properties.
## Property
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `redisHosts` |String|true|" " (empty string) | A comma-separated list of Redis hosts to connect to. |
-| `redisPassword` |String|false|" " (empty string) | The password used to connect to Redis. |
-| `redisDatabase` | int|true|0 | The Redis database to connect to. |
-| `clientMode` |String| false|Standalone | The client mode when interacting with Redis cluster.
Below are the available options:
Standalone
Cluster |
-| `autoReconnect` | boolean|false|true | Whether the Redis client automatically reconnect or not. |
-| `requestQueue` | int|false|2147483647 | The maximum number of queued requests to Redis. |
-| `tcpNoDelay` |boolean| false| false | Whether to enable TCP with no delay or not. |
-| `keepAlive` | boolean|false | false |Whether to enable a keepalive to Redis or not. |
-| `connectTimeout` |long| false|10000 | The time to wait before timing out when connecting in milliseconds. |
-| `operationTimeout` | long|false|10000 | The time before an operation is marked as timed out in milliseconds . |
-| `batchTimeMs` | int|false|1000 | The Redis operation time in milliseconds. |
-| `batchSize` | int|false|200 | The batch size of writing to Redis database. |
+| Name | Type | Required | Sensitive | Default | Description |
+|--------------------|---------|----------|-----------|--------------------|---------------------------------------------------------------------------------------------------------------------------------|
+| `redisHosts` | String | true | false | " " (empty string) | A comma-separated list of Redis hosts to connect to. |
+| `redisPassword` | String | false | true | " " (empty string) | The password used to connect to Redis. |
+| `redisDatabase` | int | true | false | 0 | The Redis database to connect to. |
+| `clientMode` | String | false | false | Standalone | The client mode when interacting with Redis cluster.
Below are the available options:
Standalone
Cluster |
+| `autoReconnect`    | boolean | false    | false     | true               | Whether the Redis client automatically reconnects or not. |
+| `requestQueue` | int | false | false | 2147483647 | The maximum number of queued requests to Redis. |
+| `tcpNoDelay` | boolean | false | false | false | Whether to enable TCP with no delay or not. |
+| `keepAlive` | boolean | false | false | false | Whether to enable a keepalive to Redis or not. |
+| `connectTimeout`   | long    | false    | false     | 10000              | The time in milliseconds to wait before a connection attempt times out. |
+| `operationTimeout` | long    | false    | false     | 10000              | The time in milliseconds before an operation is marked as timed out. |
+| `batchTimeMs`      | int     | false    | false     | 1000               | The interval in milliseconds at which batched writes are flushed to Redis. |
+| `batchSize`        | int     | false    | false     | 200                | The batch size for writing to the Redis database. |
## Example
diff --git a/connectors/solr-sink/v3.1.1.1/solr-sink.md b/connectors/solr-sink/v3.1.1.1/solr-sink.md
index 3c9d5d16b..24c4b12b7 100644
--- a/connectors/solr-sink/v3.1.1.1/solr-sink.md
+++ b/connectors/solr-sink/v3.1.1.1/solr-sink.md
@@ -29,14 +29,14 @@ The configuration of the Solr sink connector has the following properties.
## Property
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `solrUrl` | String|true|" " (empty string) | Comma-separated zookeeper hosts with chroot used in the SolrCloud mode.
**Example**
`localhost:2181,localhost:2182/chroot`
URL to connect to Solr used in standalone mode.
**Example**
`localhost:8983/solr` |
-| `solrMode` | String|true|SolrCloud| The client mode when interacting with the Solr cluster.
Below are the available options:
Standalone
SolrCloud|
-| `solrCollection` |String|true| " " (empty string) | Solr collection name to which records need to be written. |
-| `solrCommitWithinMs` |int| false|10 | The time within million seconds for Solr updating commits.|
-| `username` |String|false| " " (empty string) | The username for basic authentication.
**Note: `usename` is case-sensitive.** |
-| `password` | String|false| " " (empty string) | The password for basic authentication.
**Note: `password` is case-sensitive.** |
+| Name | Type | Required | Sensitive | Default | Description |
+|----------------------|--------|----------|-----------|--------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `solrUrl`            | String | true     | false     | " " (empty string) | Comma-separated ZooKeeper hosts (with chroot) used in SolrCloud mode.
**Example**
`localhost:2181,localhost:2182/chroot`
URL to connect to Solr used in standalone mode.
**Example**
`localhost:8983/solr` |
+| `solrMode` | String | true | false | SolrCloud | The client mode when interacting with the Solr cluster.
Below are the available options:
Standalone
SolrCloud |
+| `solrCollection` | String | true | false | " " (empty string) | Solr collection name to which records need to be written. |
+| `solrCommitWithinMs` | int    | false    | false     | 10                 | The time in milliseconds within which Solr commits updates. |
+| `username`           | String | false    | true      | " " (empty string) | The username for basic authentication.
**Note: `username` is case-sensitive.** |
+| `password` | String | false | true | " " (empty string) | The password for basic authentication.
**Note: `password` is case-sensitive.** |
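For illustration, a minimal, hypothetical config fragment for SolrCloud mode might look like this (hosts, collection, and credentials are placeholders):

```yaml
configs:
  solrUrl: "localhost:2181,localhost:2182/chroot"  # ZooKeeper hosts for SolrCloud mode
  solrMode: "SolrCloud"
  solrCollection: "my-collection"
  solrCommitWithinMs: 10
  username: "solr-user"                            # sensitive; case-sensitive
  password: "secret"                               # sensitive; case-sensitive
```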
## Example
diff --git a/connectors/twitter-firehose-source/v3.1.1.1/twitter-firehose-source.md b/connectors/twitter-firehose-source/v3.1.1.1/twitter-firehose-source.md
index a41044cfc..b8c1e7a9e 100644
--- a/connectors/twitter-firehose-source/v3.1.1.1/twitter-firehose-source.md
+++ b/connectors/twitter-firehose-source/v3.1.1.1/twitter-firehose-source.md
@@ -29,15 +29,15 @@ The configuration of the Twitter Firehose source connector has the following pro
## Property
-| Name | Type|Required | Default | Description
-|------|----------|----------|---------|-------------|
-| `consumerKey` | String|true | " " (empty string) | The twitter OAuth consumer key.
For more information, see [Access tokens](https://developer.twitter.com/en/docs/basics/authentication/guides/access-tokens). |
-| `consumerSecret` | String |true | " " (empty string) | The twitter OAuth consumer secret. |
-| `token` | String|true | " " (empty string) | The twitter OAuth token. |
-| `tokenSecret` | String|true | " " (empty string) | The twitter OAuth secret. |
-| `guestimateTweetTime`|Boolean|false|false|Most firehose events have null createdAt time.
If `guestimateTweetTime` set to true, the connector estimates the createdTime of each firehose event to be current time.
-| `clientName` | String |false | openconnector-twitter-source| The twitter firehose client name. |
-| `clientHosts` |String| false | Constants.STREAM_HOST | The twitter firehose hosts to which client connects. |
-| `clientBufferSize` | int|false | 50000 | The buffer size for buffering tweets fetched from twitter firehose. |
+| Name | Type | Required | Sensitive | Default | Description |
+|-----------------------|---------|----------|-----------|------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `consumerKey`         | String  | true     | true      | " " (empty string)           | The Twitter OAuth consumer key.
For more information, see [Access tokens](https://developer.twitter.com/en/docs/basics/authentication/guides/access-tokens). |
+| `consumerSecret`      | String  | true     | true      | " " (empty string)           | The Twitter OAuth consumer secret. |
+| `token`               | String  | true     | true      | " " (empty string)           | The Twitter OAuth token. |
+| `tokenSecret`         | String  | true     | true      | " " (empty string)           | The Twitter OAuth token secret. |
+| `guestimateTweetTime` | Boolean | false    | false     | false                        | Most firehose events have a null createdAt time.
If `guestimateTweetTime` is set to true, the connector estimates the createdAt time of each firehose event to be the current time. |
+| `clientName`          | String  | false    | false     | openconnector-twitter-source | The Twitter firehose client name. |
+| `clientHosts`         | String  | false    | false     | Constants.STREAM_HOST        | The Twitter firehose hosts to which the client connects. |
+| `clientBufferSize`    | int     | false    | false     | 50000                        | The buffer size for buffering tweets fetched from the Twitter firehose. |
> For more information about OAuth credentials, see [Twitter developers portal](https://developer.twitter.com/en.html).