diff --git a/images/dashboard-otel.png b/images/dashboard-otel.png
new file mode 100644
index 0000000..c564fc7
Binary files /dev/null and b/images/dashboard-otel.png differ
diff --git a/images/dashboard-redis.png b/images/dashboard-redis.png
new file mode 100644
index 0000000..c5b23e8
Binary files /dev/null and b/images/dashboard-redis.png differ
diff --git a/index.html b/index.html
index d526252..94dfd8d 100644
--- a/index.html
+++ b/index.html
@@ -15,7 +15,7 @@

7.6. Security

@@ -3500,7 +3595,7 @@

String

The name of the directory containing the Redis Connect credentials file. This directory path must include a properties file named redisconnect_credentials_jobmanager.properties -[1].

+[4].

../config/samples/credentials
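As a rough illustration of the behavior described in footnote [4] (credentials are re-read from this directory on every connection rather than cached), the Java sketch below loads the job manager credentials file fresh on each call. It is not Redis Connect source code, and the "username" key is a hypothetical property name used only for the example.

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Properties;

    public class CredentialsFileSketch {
        // Re-read the job manager credentials file on every call so that rotated
        // credentials are picked up without caching (illustrative only).
        static Properties load(Path credentialsDirectoryPath) throws IOException {
            Path file = credentialsDirectoryPath.resolve("redisconnect_credentials_jobmanager.properties");
            Properties props = new Properties();
            try (InputStream in = Files.newInputStream(file)) {
                props.load(in);
            }
            return props;
        }

        public static void main(String[] args) throws IOException {
            Properties creds = load(Path.of("../config/samples/credentials"));
            System.out.println(creds.getProperty("username")); // hypothetical key name
        }
    }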

@@ -3564,7 +3659,7 @@

mail.smtp.start.tls.enable

Boolean

-

Set or disable STARTTLS encryption[2].

+

Enable or disable STARTTLS encryption[5].

true
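For context on what STARTTLS does (footnote [5]), the sketch below shows how a JavaMail-style SMTP session is typically configured to upgrade the connection to TLS. Whether Redis Connect's alerting uses JavaMail is an assumption; note that the underlying JavaMail property name (mail.smtp.starttls.enable) differs slightly from the table's key.

    import java.util.Properties;
    import jakarta.mail.Session;

    public class SmtpStartTlsSketch {
        static Session tlsSession(String smtpHost) {
            Properties props = new Properties();
            props.put("mail.smtp.host", smtpHost);
            // Ask the mail server to upgrade the plain connection to TLS via STARTTLS
            props.put("mail.smtp.starttls.enable", "true");
            return Session.getInstance(props);
        }
    }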

@@ -3626,7 +3721,7 @@

In other words, jobName does not appear in the job configuration payload.

Note: jobName should not be confused with jobId. jobIds are created as part of a job claim. They append a namespace to the jobName to identify the jobType and partitionId (if jobType=PARTITIONED_STREAM) -[3].

+[6].

n/a

@@ -3644,7 +3739,7 @@


The number of job partitions that can be claimed, and executed, on the same Redis Connect instance (JVM).

If the limit forces partitions to span more instances than are currently deployed, then the job will not be able to start or migrate -[4].

+[7].

0
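A back-of-the-envelope sketch of the sizing rule in footnote [7]: with maxPartitionsPerClusterMember=1 and partitions=3, at least three Redis Connect instances are required, and 0 means no per-instance limit. The helper below is illustrative only, not part of Redis Connect.

    public class PartitionSizingSketch {
        // Minimum number of Redis Connect instances (JVMs) needed to claim all
        // partitions of a job, given the per-instance cap (0 = no limit).
        static int minimumInstances(int partitions, int maxPartitionsPerClusterMember) {
            if (maxPartitionsPerClusterMember <= 0) {
                return 1; // no cap: a single instance could claim every partition
            }
            return (int) Math.ceil((double) partitions / maxPartitionsPerClusterMember);
        }

        public static void main(String[] args) {
            System.out.println(minimumInstances(3, 1)); // 3, matching the footnote example
        }
    }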

@@ -3686,7 +3781,7 @@

Integer

Redis Connect’s pipeline is powered by the LMAX Disruptor library (High Performance Inter-Thread Messaging).

Must be a power of 2, minimum 1024 -[5]

+[8]

4096
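The power-of-two requirement comes from the Disruptor ring buffer. A quick illustrative check of a candidate buffer size against the documented constraint (power of 2, minimum 1024, default 4096); this is a sketch, not Redis Connect validation code.

    public class BufferSizeCheckSketch {
        // A valid ring-buffer size is a power of 2 and at least 1024.
        static boolean isValidBufferSize(int size) {
            return size >= 1024 && Integer.bitCount(size) == 1; // exactly one bit set => power of 2
        }

        public static void main(String[] args) {
            System.out.println(isValidBufferSize(4096)); // true
            System.out.println(isValidBufferSize(3000)); // false: not a power of 2
        }
    }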

@@ -3777,7 +3872,7 @@

Although the producer’s polling event loop enqueues changed-data events in batches, each event is processed individually through the pipeline. This is because Redis Connect updates the checkpoint at the changed-data event level and not the batch -[6].

+[9].

false

@@ -3893,7 +3988,7 @@


sourceConnectionRetryDelayInterval

Long

Fixed delay between successive connection retry attempts (see sourceConnectionMaxRetryAttempts).

-

Measured in seconds; minimum is 0[7].

+

Measured in seconds; minimum is 0[10].

60
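An illustrative sketch of how a fixed delay between retry attempts typically interacts with sourceConnectionMaxRetryAttempts. The retry loop below reflects assumed semantics for the example and is not Redis Connect source code.

    import java.util.concurrent.TimeUnit;
    import java.util.function.Supplier;

    public class ConnectionRetrySketch {
        // Try to connect up to maxRetryAttempts times, sleeping a fixed
        // retryDelaySeconds between failed attempts.
        static <T> T connectWithRetry(Supplier<T> connect, int maxRetryAttempts, long retryDelaySeconds)
                throws InterruptedException {
            RuntimeException lastFailure = new IllegalStateException("no attempts made");
            for (int attempt = 1; attempt <= maxRetryAttempts; attempt++) {
                try {
                    return connect.get();
                } catch (RuntimeException e) {
                    lastFailure = e;
                    TimeUnit.SECONDS.sleep(retryDelaySeconds); // fixed delay, e.g. the default of 60 seconds
                }
            }
            throw lastFailure;
        }
    }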

@@ -3995,7 +4090,7 @@

[8].

+[11].

false

@@ -4234,7 +4329,7 @@

credentialsRotationEventListenerEnabled

Boolean

When enabled, the credentialsDirectoryPath will be periodically scanned for changes that are specific to the -property file associated with this database[9].

+property file associated with this database[12].

false
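A conceptual sketch of detecting a rotated credentials file by polling its last-modified timestamp, in the spirit of footnote [12]. This is an assumption-level illustration, not the shipped listener implementation.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.attribute.FileTime;

    public class CredentialsRotationSketch {
        private FileTime lastSeen;

        // Returns true when the database's credentials properties file has changed
        // since the previous scan; the caller would then rebuild the connection
        // without stopping the job.
        boolean hasRotated(Path credentialsFile) throws IOException {
            FileTime current = Files.getLastModifiedTime(credentialsFile);
            boolean changed = lastSeen != null && !current.equals(lastSeen);
            lastSeen = current;
            return changed;
        }
    }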

@@ -4320,7 +4415,7 @@


String

Specifies the criteria for running a snapshot when the connector starts. It is not recommended to use this Debezium capability for initial load in Production. See Production Readiness for more information. MySQL, Postgres default to 'never'; Oracle, SQL Server default to 'schema_only' -[10].

+[13].

n/a
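The allowed values listed in footnote [13] map onto Debezium's snapshot.mode setting. A simple illustrative enumeration of those documented options (naming and comments only; not a Redis Connect type):

    public enum SnapshotModeSketch {
        INITIAL,              // snapshot only when no offsets exist for the logical server name
        INITIAL_ONLY,         // snapshot, then stop without reading change events
        WHEN_NEEDED,          // snapshot whenever offsets are missing or unusable
        NEVER,                // no snapshot; read from the beginning of the binlog
        SCHEMA_ONLY,          // snapshot schemas but not data
        SCHEMA_ONLY_RECOVERY  // rebuild a corrupted or lost schema history topic
    }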

@@ -4657,7 +4752,7 @@


Maximum number of connections that the pool can create. If all connections are in use, an operation requiring a client-to-server connection is blocked until a connection is available or the free-connection-timeout is reached. If set to -1, there is no maximum. The setting must indicate a cap greater than min-connections -[11].

+[14].

-1
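An illustrative check of the documented constraint for the GemFire client pool: when max-connections is not -1, it must be greater than min-connections. This is a sketch only, not Geode or Redis Connect code.

    public class PoolSettingsSketch {
        // -1 means unbounded; otherwise the cap must exceed min-connections.
        static boolean isValidMaxConnections(int maxConnections, int minConnections) {
            return maxConnections == -1 || maxConnections > minConnections;
        }

        public static void main(String[] args) {
            System.out.println(isValidMaxConnections(-1, 5));  // true: no maximum
            System.out.println(isValidMaxConnections(10, 20)); // false: cap below min-connections
        }
    }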

@@ -5066,7 +5161,7 @@

max.queue.size

Integer

Specifies the maximum number of records that the blocking queue can hold. -[12].

+[15].

32768
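A sketch of the backpressure relationship described in footnote [15]: the blocking queue must be larger than the batch size so the producer blocks instead of overrunning the consumer. The batch size value below is assumed for illustration only.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class QueueSizingSketch {
        public static void main(String[] args) {
            int maxQueueSize = 32768; // documented default
            int maxBatchSize = 2048;  // assumed value, for illustration only
            if (maxQueueSize <= maxBatchSize) {
                throw new IllegalArgumentException("max.queue.size must be larger than the batch size");
            }
            // put() blocks when the queue is full, applying backpressure to the reader
            BlockingQueue<Object> queue = new ArrayBlockingQueue<>(maxQueueSize);
            System.out.println("remaining capacity = " + queue.remainingCapacity());
        }
    }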

@@ -5074,7 +5169,7 @@

Boolean

When enabled, batches of changed-data events are persisted to a Redis Stream before they are enqueued in the in-memory queue, which effectively mimics a change-data-capture (CDC) process within Redis Connect. -[13].

+[16].

True
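A conceptual sketch of persisting a changed-data event to a Redis Stream before it is handed to the in-memory queue. It assumes the Jedis client and made-up stream key and field names; it is not Redis Connect source code.

    import java.util.Map;
    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.StreamEntryID;

    public class StreamPersistenceSketch {
        public static void main(String[] args) {
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                // Hypothetical event payload; real field names will differ
                Map<String, String> event = Map.of("table", "orders", "op", "UPDATE", "id", "42");
                // XADD gives the event a durable home before in-memory processing
                jedis.xadd("changed-data-events", StreamEntryID.NEW_ENTRY, event);
            }
        }
    }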

@@ -5092,49 +5187,58 @@


-1. Redis Connect never caches or persists credentials. Therefore, on each connection with the source, target, or job manager database, the credentials are read from a file. This enhances security and allows for seamless credential rotations and integration with secret management frameworks such as HashiCorp Vault. +1. The Gemfire Connector leverages the Apache Geode Client; consequently, Redis Connect supports only one Gemfire job per Redis Connect instance.
-2. StartTLS is an extension of the SMTP protocol that tells the email server that the email client wants to use a secure connection using TLS or SSL. +2. Stream jobs in the GemFire connector leverage Durable Events in GemFire. This means that streaming is durable across fail-overs of Redis Connect, subject to the eviction policy and capacity of the GemFire Subscriber Queue.
-3. When jobName is used in logging or administrative processes (i.e., stopJob), the jobName represents ALL job partitions. Must be between 4 and 50 characters. +3. The Gemfire connector supports only a single partition per job.
-4. For example, if maxPartitionsPerClusterMember=1 and partitions=3, then the Redis Connect cluster will require at least 3 instances (JVMs) each with at least 1 available capacity to claim a job partition. This is not a global limit; it is only specific at the job level 0 represents no limit +4. Redis Connect never caches or persists credentials. Therefore, on each connection with the source, target, or job manager database, the credentials are read from a file. This enhances security and allows for seamless credential rotations and integration with secret management frameworks such as HashiCorp Vault.
-5. The buffer size sets the number of slots allocated within the Disruptor’s internal ring buffer "queue". Increasing the buffer size will impact the JVM heap space required to store all transient changed data events within the queue. For most cases, this can be left as default. +5. StartTLS is an extension of the SMTP protocol that tells the email server that the email client wants to use a secure connection using TLS or SSL.
-6. When enabled, the checkpoint will be committed as part of an atomic Redis transaction. This eliminates consistency issues and improves performance. Rollback capability is built in to handle any failure scenarios during the transaction so that no data will be lost. When disabled, the checkpoint will be committed after the the changed-data events are written. This adds another network round trip for each changed-data event. While distributed checkpoints have distinct advantages, there is a small tradeoff. Because Redis keys are bound by their hash-slots, distributed checkpoints require that we store one checkpoint per hash slot. When enabled, each job partition will have its own 16384 checkpoint keys created in the target database. With ~250 bytes per checkpoint, each partition’s estimated overhead is ~4MB. When disabled, there is only a single checkpoint key in the target per partition. Distributed checkpoints require RediSearch. We use RediSearch to index checkpoint keys so that recovery from the latest checkpoint is immediate +6. When jobName is used in logging or administrative processes (i.e., stopJob), the jobName represents ALL job partitions. Must be between 4 and 50 characters.
-7. sourceConnectionRetryDelayInterval must be < than sourceConnectionRetryDelayInterval +7. For example, if maxPartitionsPerClusterMember=1 and partitions=3, then the Redis Connect cluster will require at least 3 instances (JVMs), each with at least 1 available capacity to claim a job partition. This is not a global limit; it is specific to the job level. 0 represents no limit.
-8. When enabled, the column-level changedColumnOnlyEnabled flag will be overridden for all columns other than those designated as targetKey(s). This is currently only supported for RDBMS sources +8. The buffer size sets the number of slots allocated within the Disruptor’s internal ring buffer "queue". Increasing the buffer size will impact the JVM heap space required to store all transient changed data events within the queue. For most cases, this can be left as default.
-9. If a change is identified, the listener will create a new connection without bringing down the Redis Connect instance nor stopping the job. There might be a momentary pause in pipeline processing while the connection is being reestablished. No data will be lost in this process. Disclaimer: if the new credentials cannot be used to create a connection, the job will be stopped for manual intervention. +9. When enabled, the checkpoint will be committed as part of an atomic Redis transaction. This eliminates consistency issues and improves performance. Rollback capability is built in to handle any failure scenarios during the transaction so that no data will be lost. When disabled, the checkpoint will be committed after the changed-data events are written. This adds another network round trip for each changed-data event. While distributed checkpoints have distinct advantages, there is a small tradeoff. Because Redis keys are bound by their hash-slots, distributed checkpoints require that we store one checkpoint per hash slot. When enabled, each job partition will have its own 16384 checkpoint keys created in the target database. With ~250 bytes per checkpoint, each partition’s estimated overhead is ~4MB. When disabled, there is only a single checkpoint key in the target per partition. Distributed checkpoints require RediSearch. We use RediSearch to index checkpoint keys so that recovery from the latest checkpoint is immediate.
-10. Possible settings are: initial - the connector runs a snapshot only when no offsets have been recorded for the logical server name. initial_only - the connector runs a snapshot only when no offsets have been recorded for the logical server name and then stops; i.e. it will not read change events from the binlog. when_needed - the connector runs a snapshot upon startup whenever it deems it necessary. That is, when no offsets are available, or when a previously recorded offset specifies a binlog location or GTID that is not available in the server. never - the connector never uses snapshots. Upon first startup with a logical server name, the connector reads from the beginning of the binlog. Configure this behavior with care. It is valid only when the binlog is guaranteed to contain the entire history of the database. schema_only - the connector runs a snapshot of the schemas and not the data. This setting is useful when you do not need the topics to contain a consistent snapshot of the data but need them to have only the changes since the connector was started. schema_only_recovery - this is a recovery setting for a connector that has already been capturing changes. When you restart the connector, this setting enables recovery of a corrupted or lost database schema history topic. You might set it periodically to "clean up" a database schema history topic that has been growing unexpectedly. Database schema history topics require infinite retention +10. sourceConnectionRetryDelayInterval must be < than sourceConnectionRetryDelayInterval
-11. If you use this setting to cap your pool connections, deactivate the pool attribute pr-single-hop-enabled. Leaving single hop activated can increase thrashing and lower performance. +11. When enabled, the column-level changedColumnOnlyEnabled flag will be overridden for all columns other than those designated as targetKey(s). This is currently only supported for RDBMS sources
-12. The blocking queue provides backpressure for reading changed data events from the source in cases where the connector ingests messages faster than they are consumed. Events that are held in the queue are disregarded when the connector periodically records offsets. Always set the value of maxQueueSize to be larger than the value of maxBatchSize +12. If a change is identified, the listener will create a new connection without bringing down the Redis Connect instance nor stopping the job. There might be a momentary pause in pipeline processing while the connection is being reestablished. No data will be lost in this process. Disclaimer: if the new credentials cannot be used to create a connection, the job will be stopped for manual intervention.
-13. Persistence allows for recovery of transient changed-data events which is critical for sources like Splunk that do not implement their own change-data-capture (CDC) process. Only supported by Gemfire and Splunk +13. Possible settings are: initial - the connector runs a snapshot only when no offsets have been recorded for the logical server name. initial_only - the connector runs a snapshot only when no offsets have been recorded for the logical server name and then stops; i.e. it will not read change events from the binlog. when_needed - the connector runs a snapshot upon startup whenever it deems it necessary. That is, when no offsets are available, or when a previously recorded offset specifies a binlog location or GTID that is not available in the server. never - the connector never uses snapshots. Upon first startup with a logical server name, the connector reads from the beginning of the binlog. Configure this behavior with care. It is valid only when the binlog is guaranteed to contain the entire history of the database. schema_only - the connector runs a snapshot of the schemas and not the data. This setting is useful when you do not need the topics to contain a consistent snapshot of the data but need them to have only the changes since the connector was started. schema_only_recovery - this is a recovery setting for a connector that has already been capturing changes. When you restart the connector, this setting enables recovery of a corrupted or lost database schema history topic. You might set it periodically to "clean up" a database schema history topic that has been growing unexpectedly. Database schema history topics require infinite retention +
+
+14. If you use this setting to cap your pool connections, deactivate the pool attribute pr-single-hop-enabled. Leaving single hop activated can increase thrashing and lower performance. +
+
+15. The blocking queue provides backpressure for reading changed data events from the source in cases where the connector ingests messages faster than they are consumed. Events that are held in the queue are disregarded when the connector periodically records offsets. Always set the value of maxQueueSize to be larger than the value of maxBatchSize +
+
+16. Persistence allows for recovery of transient changed-data events, which is critical for sources like Splunk that do not implement their own change-data-capture (CDC) process. Only supported by Gemfire and Splunk.

diff --git a/redis-connect-0.11.2.pdf b/redis-connect-0.11.2.pdf
new file mode 100644
index 0000000..56b6ef3
Binary files /dev/null and b/redis-connect-0.11.2.pdf differ