diff --git a/docs/encyclopedia/clusters.mdx b/docs/encyclopedia/clusters.mdx
index 4d7ebbb6f2..c2e6a6d5c8 100644
--- a/docs/encyclopedia/clusters.mdx
+++ b/docs/encyclopedia/clusters.mdx
@@ -1,9 +1,9 @@
---
id: clusters
-title: What is a Temporal Cluster?
-sidebar_label: Clusters
+title: What is a Temporal Service?
+sidebar_label: Temporal Service
sidebar_position: 8
-description: This guide provides a comprehensive overview of Temporal Clusters.
+description: This guide provides a comprehensive overview of the Temporal Service.
slug: /clusters
toc_max_heading_level: 4
keywords:
@@ -14,13 +14,19 @@ tags:
- term
---
-This guide provides a comprehensive overview of Temporal Clusters.
+:::info
+Please note an important update in our terminology.
-A Temporal Cluster is the group of services, known as the [Temporal Server](#temporal-server), combined with [Persistence](#persistence) and [Visibility](#visibility) stores, that together act as a component of the Temporal Platform.
+We now refer to the Temporal Cluster as the Temporal Service.
+:::
+
+This page provides a comprehensive technical overview of a Temporal Service.
+
+A Temporal Service is the group of services, known as the [Temporal Server](#temporal-server), combined with [Persistence](#persistence) and [Visibility](#visibility) stores, that together act as a component of the Temporal Platform.
-- [Cluster deployment guide](/self-hosted-guide)
+See the Self-hosted Temporal Service [production deployment guide](/self-hosted-guide) for implementation guidance.
-
+
## What is the Temporal Server? {#temporal-server}
@@ -31,11 +37,11 @@ The Temporal Server consists of four independently scalable services:
- Matching subsystem: hosts Task Queues for dispatching.
- Worker Service: for internal background Workflows.
-For example, a real-life production deployment can have 5 Frontend, 15 History, 17 Matching, and 3 Worker Services per cluster.
+For example, a real-life production deployment of a single Temporal Service can have 5 Frontend, 15 History, 17 Matching, and 3 Worker Service instances.
The Temporal Server services can run independently or be grouped together into shared processes on one or more physical or virtual machines.
For live (production) environments, we recommend that each service runs independently, because each one has different scaling requirements and troubleshooting becomes easier.
-The History, Matching, and Worker Services can scale horizontally within a Cluster.
+The History, Matching, and Worker Services can scale horizontally within a Temporal Service.
The Frontend Service scales differently than the others because it has no sharding or partitioning; it is just stateless.
Each service is aware of the others, including scaled instances, through a membership protocol via [Ringpop](https://github.com/temporalio/ringpop-go).
@@ -45,7 +51,7 @@ Each service is aware of the others, including scaled instances, through a membe
All Temporal Server releases abide by the [Semantic Versioning Specification](https://semver.org/).
We support upgrade paths from every version beginning with Temporal v1.7.0.
-For details on upgrading your Temporal Cluster, see [Upgrade Server](/self-hosted-guide/upgrade-server#upgrade-server).
+For details on upgrading your Temporal Service, see [Upgrade Server](/self-hosted-guide/upgrade-server#upgrade-server).
We provide maintenance support for previously published minor and major versions by continuing to release critical bug fixes related to security, the prevention of data loss, and reliability, whenever they are found.
@@ -81,10 +87,10 @@ Types of inbound calls include the following:
- Worker polls
- [Visibility](#visibility) requests
- [Temporal CLI](/cli) (the Temporal CLI) operations
-- Calls from a remote Cluster related to [Multi-Cluster Replication](#multi-cluster-replication)
+- Calls from a remote Temporal Service related to [Multi-Cluster Replication](#multi-cluster-replication)
Every inbound request related to a Workflow Execution must have a Workflow Id, which is hashed for routing purposes.
-The Frontend Service has access to the hash rings that maintain service membership information, including how many nodes (instances of each service) are in the Cluster.
+The Frontend Service has access to the hash rings that maintain service membership information, including how many nodes (instances of each service) are in the Temporal Service.
Inbound call rate limiting is applied per host and per namespace.
@@ -93,7 +99,7 @@ The Frontend Service talks to the Matching Service, History Service, Worker Serv
- It uses the grpcPort 7233 to host the service handler.
- It uses port 6933 for membership-related communication.
-Ports are configurable in the Cluster configuration.
+Ports are configurable in the Temporal Service configuration.
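+
+For illustration, the ports above map onto the `services.frontend` block of the static configuration roughly as follows (a sketch; key names such as `membershipPort` follow the sample configuration files in the Temporal repository and can vary between Server versions):
+
+```yaml
+services:
+  frontend:
+    rpc:
+      grpcPort: 7233        # hosts the Frontend Service handler
+      membershipPort: 6933  # membership-related communication
+```
+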
### What is a History Service? {#history-service}
@@ -104,15 +110,15 @@ From there, a Worker can poll for work, receive this updated history, and resume
- Block diagram of how the History Service relates to the other services of the Temporal Server and to a Temporal
- Cluster
+ Block diagram of how the History Service relates to the other services of the Temporal Server and to the Temporal
+ Service
@@ -121,31 +127,31 @@ The total number of History Service processes can be between 1 and the total num
An individual History Service can support many History Shards.
Temporal recommends starting at a ratio of 1 History Service process for every 500 History Shards.
-Although the total number of History Shards remains static for the life of the Cluster, the number of History Service processess can change.
+Although the total number of History Shards remains static for the life of the Temporal Service, the number of History Service processes can change.
The History Service talks to the Matching Service and the database.
- It uses grpcPort 7234 to host the service handler.
- It uses port 6934 for membership-related communication.
-Ports are configurable in the Cluster configuration.
+Ports are configurable in the Temporal Service configuration.
#### What is a History Shard? {#history-shard}
-A History Shard is an important unit within a Temporal Cluster by which concurrent Workflow Execution throughput can be scaled.
+A History Shard is an important unit within a Temporal Service by which concurrent Workflow Execution throughput can be scaled.
Each History Shard maps to a single persistence partition.
A History Shard assumes that only one concurrent operation can be within a partition at a time.
-In essence, the number of History Shards represents the number of concurrent database operations that can occur for a Cluster.
-This means that the number of History Shards in a Temporal Cluster plays a significant role in the performance of your Temporal Application.
+In essence, the number of History Shards represents the number of concurrent database operations that can occur for a Temporal Service.
+This means that the number of History Shards in a Temporal Service plays a significant role in the performance of your Temporal Application.
-Before integrating a database, the total number of History Shards for the Temporal Cluster must be chosen and set in the Cluster's configuration (see [persistence](/references/configuration#persistence)).
-After the Shard count is configured and the database integrated, the total number of History Shards for the Cluster cannot be changed.
+Before integrating a database, the total number of History Shards for the Temporal Service must be chosen and set in the Temporal Service's configuration (see [persistence](/references/configuration#persistence)).
+After the Shard count is configured and the database integrated, the total number of History Shards for the Temporal Service cannot be changed.
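+
+The Shard count is set with the `numHistoryShards` key in the `persistence` section of the static configuration. The following is a minimal sketch assuming a SQL datastore; the surrounding key names follow the sample configuration files and the [persistence configuration reference](/references/configuration#persistence), and the values are placeholders:
+
+```yaml
+persistence:
+  numHistoryShards: 512   # fixed for the life of the Temporal Service
+  defaultStore: default
+  visibilityStore: visibility
+  datastores:
+    default:
+      sql:
+        pluginName: "postgres12"
+        databaseName: "temporal"
+        connectAddr: "127.0.0.1:5432"
+    # visibility datastore omitted for brevity
+```
+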
-In theory, a Temporal Cluster can operate with an unlimited number of History Shards, but each History Shard adds compute overhead to the Cluster.
-Temporal Clusters have operated successfully using anywhere from 1 to 128K History Shards, with each Shard responsible for tens of thousands of Workflow Executions.
+In theory, a Temporal Service can operate with an unlimited number of History Shards, but each History Shard adds compute overhead to the Temporal Service.
+Temporal Service deployments have operated successfully using anywhere from 1 to 128K History Shards, with each Shard responsible for tens of thousands of Workflow Executions.
One Shard is useful only in small scale setups designed for testing, while 128k Shards is useful only in very large scale production environments.
-The correct number of History Shards for any given Cluster depends entirely on the Temporal Application that it is supporting and the type of database.
+The correct number of History Shards for any given Temporal Service depends entirely on the Temporal Application that it is supporting and the type of database.
A History Shard is represented as a hashed integer.
Each Workflow Execution is automatically assigned to a History Shard.
@@ -181,7 +187,7 @@ It talks to the Frontend Service, History Service, and the database.
- It uses grpcPort 7235 to host the service handler.
- It uses port 6935 for membership related communication.
-Ports are configurable in the Cluster configuration.
+Ports are configurable in the Temporal Service configuration.
### What is a Worker Service? {#worker-service}
@@ -200,11 +206,11 @@ It talks to the Frontend Service.
- It uses port 6939 for membership-related communication.
-Ports are configurable in the Cluster configuration.
+Ports are configurable in the Temporal Service configuration.
### What is a Retention Period? {#retention-period}
-Retention Period is the duration for which the Temporal Cluster stores data associated with closed Workflow Executions on a Namespace in the Persistence store.
+Retention Period is the duration for which the Temporal Service stores data associated with closed Workflow Executions on a Namespace in the Persistence store.
- [How to set the Retention Period for a Namespace](/cli/operator#create)
- [How to set the Retention Period for a Namespace using the Go SDK](/dev-guide/go/features#namespaces)
@@ -212,11 +218,11 @@ Retention Period is the duration for which the Temporal Cluster stores data asso
A Retention Period applies to all closed Workflow Executions within a [Namespace](/namespaces) and is set when the Namespace is registered.
-The Temporal Cluster triggers a Timer task at the end of the Retention Period that cleans up the data associated with the closed Workflow Execution on that Namespace.
+The Temporal Service triggers a Timer task at the end of the Retention Period that cleans up the data associated with the closed Workflow Execution on that Namespace.
The minimum Retention Period is 1 day.
-On Temporal Cluster version 1.18 and later, the maximum Retention Period value for Namespaces can be set to anything over the minimum requirement of 1 day. Ensure that your Persistence store has enough capacity for the storage.
-On Temporal Cluster versions 1.17 and earlier, the maximum Retention Period you can set is 30 days.
+On Temporal Service versions 1.18 and later, the maximum Retention Period value for Namespaces can be set to anything over the minimum requirement of 1 day. Ensure that your Persistence store has enough capacity for the storage.
+On Temporal Service versions 1.17 and earlier, the maximum Retention Period you can set is 30 days.
Setting the Retention Period to 0 results in the error _A valid retention period is not set on request_.
If you don't set the Retention Period value when using the [`temporal operator namespace create`](/cli/operator#create) command, it defaults to 3 days.
@@ -228,9 +234,9 @@ When changing the Retention Period, the new duration applies to Workflow Executi
## What is Persistence? {#persistence}
-The Temporal Persistence store is a database used by [Temporal Services](#temporal-server) to persist events generated and processed in your Temporal Cluster and SDK.
+The Temporal Persistence store is a database used by the [Temporal Server](#temporal-server) to persist events generated and processed in your Temporal Service and SDK.
-A Temporal Cluster's only required dependency for basic operation is the Persistence database.
+A Temporal Service's only required dependency for basic operation is the Persistence database.
Multiple types of databases are supported.
@@ -248,11 +254,11 @@ The database stores the following types of data:
- State of Workflow Executions:
- Execution table: A capture of the mutable state of Workflow Executions.
- History table: An append-only log of Workflow Execution History Events.
-- Namespace metadata: Metadata of each Namespace in the Cluster.
+- Namespace metadata: Metadata of each Namespace in the Temporal Service.
- [Visibility](#visibility) data: Enables operations like "show all running Workflow Executions".
For production environments, we recommend using Elasticsearch as your Visibility store.
-An Elasticsearch database must be configured in a self-hosted Cluster to enable [advanced Visibility](/visibility#advanced-visibility) on Temporal Server versions 1.19.1 and earlier.
+An Elasticsearch database must be configured in a self-hosted Temporal Service to enable [advanced Visibility](/visibility#advanced-visibility) on Temporal Server versions 1.19.1 and earlier.
With Temporal Server version 1.20 and later, advanced Visibility features are available on SQL databases like MySQL (version 8.0.17 and later), PostgreSQL (version 12 and later), SQLite (v3.31.0 and later), and Elasticsearch.
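+
+As an illustration, a Visibility store backed by Elasticsearch is declared as an additional datastore in the `persistence` section. This sketch assumes the key names used in the Elasticsearch sample configuration in the Temporal repository; the address and index name are placeholders:
+
+```yaml
+persistence:
+  visibilityStore: es-visibility
+  datastores:
+    es-visibility:
+      elasticsearch:
+        version: "v7"
+        url:
+          scheme: "http"
+          host: "127.0.0.1:9200"
+        indices:
+          visibility: temporal_visibility_v1_dev
+```
+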
@@ -278,13 +284,13 @@ You can verify supported databases in the [Temporal Server release notes](https:
- For Temporal Server v1.19 and earlier, all supported databases for Visibility provide standard Visibility features, and an Elasticsearch database is required for advanced Visibility features.
- For Temporal Server v1.20 and later, advanced Visibility features are enabled on all supported SQL databases, in addition to Elasticsearch.
-- In Temporal Server v1.21 and later, standard Visibility is no longer in development, and we recommend migrating to a [database that supports Advanced Visibility features](/self-hosted-guide/visibility). The Visibility configuration for Temporal Clusters has been updated and Dual Visibility is enabled. For details, see [Visibility store setup](/self-hosted-guide/visibility).
+- In Temporal Server v1.21 and later, standard Visibility is no longer in development, and we recommend migrating to a [database that supports Advanced Visibility features](/self-hosted-guide/visibility). The Visibility configuration for the Temporal Service has been updated and Dual Visibility is enabled. For details, see [Visibility store setup](/self-hosted-guide/visibility).
:::
-The term [Visibility](/visibility), within the Temporal Platform, refers to the subsystems and APIs that enable an operator to view, filter, and search for Workflow Executions that currently exist within a Cluster.
+The term [Visibility](/visibility), within the Temporal Platform, refers to the subsystems and APIs that enable an operator to view, filter, and search for Workflow Executions that currently exist within a Temporal Service.
-The [Visibility store](/self-hosted-guide/visibility) in your Temporal Cluster stores persisted Workflow Execution Event History data and is set up as a part of your [Persistence store](#persistence) to enable listing and filtering details about Workflow Executions that exist on your Temporal Cluster.
+The [Visibility store](/self-hosted-guide/visibility) in your Temporal Service stores persisted Workflow Execution Event History data and is set up as a part of your [Persistence store](#persistence) to enable listing and filtering details about Workflow Executions that exist on your Temporal Service.
- [How to set up a Visibility store](/self-hosted-guide/visibility)
@@ -296,7 +302,7 @@ Support for separate standard and advanced Visibility setups will be deprecated
## What is Archival? {#archival}
-Archival is a feature that automatically backs up [Event Histories](/workflows#event-history) and Visibility records from Temporal Cluster persistence to a custom blob store.
+Archival is a feature that automatically backs up [Event Histories](/workflows#event-history) and Visibility records from Temporal Service persistence to a custom blob store.
- [How to create a custom Archiver](/self-hosted-guide/archival#custom-archiver)
- [How to set up Archival](/self-hosted-guide/archival#set-up-archival)
@@ -304,7 +310,7 @@ Archival is a feature that automatically backs up [Event Histories](/workflows#e
Workflow Execution Event Histories are backed up after the [Retention Period](#retention-period) is reached.
Visibility records are backed up immediately after a Workflow Execution reaches a Closed status.
-Archival enables Workflow Execution data to persist as long as needed, while not overwhelming the Cluster's persistence store.
+Archival enables Workflow Execution data to persist as long as needed, while not overwhelming the Temporal Service's persistence store.
This feature is helpful for compliance and debugging.
@@ -312,20 +318,20 @@ Temporal's Archival feature is considered **experimental** and not subject to no
Archival is not supported when running Temporal through Docker and is disabled by default when installing the system manually and when deploying through [helm charts](https://github.com/temporalio/helm-charts/blob/master/templates/server-configmap.yaml) (but can be enabled in the [config](https://github.com/temporalio/temporal/blob/master/config/development.yaml)).
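+
+As a rough sketch of what enabling Archival looks like, assuming the key names used in the linked development config and with placeholder paths:
+
+```yaml
+archival:
+  history:
+    state: "enabled"
+    enableRead: true
+    provider:
+      filestore:
+        fileMode: "0666"
+        dirMode: "0766"
+  # visibility archival is configured analogously
+namespaceDefaults:
+  archival:
+    history:
+      state: "enabled"
+      URI: "file:///tmp/temporal_archival/development"
+```
+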
-## What is Cluster configuration? {#cluster-configuration}
+## What is Temporal Service configuration? {#cluster-configuration}
-Cluster configuration is the setup and configuration details of your self-hosted Temporal Cluster, defined using YAML.
-You must define your Cluster configuration when setting up your self-hosted Temporal Cluster.
+Temporal Service configuration is the setup and configuration details of your self-hosted Temporal Service, defined using YAML.
+You must define your Temporal Service configuration when setting up your self-hosted Temporal Service.
For details on using Temporal Cloud, see [Temporal Cloud documentation](/cloud).
-Cluster configuration is composed of two types of configuration: [Static configuration](#static-configuration) and [Dynamic configuration](#dynamic-configuration).
+Temporal Service configuration is composed of two types of configuration: [Static configuration](#static-configuration) and [Dynamic configuration](#dynamic-configuration).
### Static configuration
-Static configuration contains details of how the Cluster should be set up.
+Static configuration contains details of how the Temporal Service should be set up.
The static configuration is read just once and used to configure service nodes at startup.
-Depending on how you want to deploy your self-hosted Temporal Cluster, your static configuration must contain details for setting up:
+Depending on how you want to deploy your self-hosted Temporal Service, your static configuration must contain details for setting up:
- Temporal Services—Frontend, History, Matching, Worker
- Membership ports for the Temporal Services
@@ -333,68 +339,68 @@ Depending on how you want to deploy your self-hosted Temporal Cluster, your stat
- TLS, authentication, authorization
- Server log level
- Metrics
-- Cluster metadata
+- Temporal Service metadata
- Dynamic config Client
Static configuration values cannot be changed at runtime.
-Some values, such as the Metrics configuration or Server log level can be changed in the static configuration but require restarting the Cluster for the changes to take effect.
+Some values, such as the Metrics configuration or Server log level, can be changed in the static configuration but require restarting the Temporal Service for the changes to take effect.
-For details on static configuration keys, see [Cluster configuration reference](/references/configuration).
+For details on static configuration keys, see [Temporal Service configuration reference](/references/configuration).
For static configuration examples, see [https://github.com/temporalio/temporal/tree/master/config](https://github.com/temporalio/temporal/tree/master/config).
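+
+As a rough map from the list above to the YAML layout, the top-level keys of a static configuration file typically look like the following. This sketch is based on those examples; the exact set of keys depends on your deployment and Server version:
+
+```yaml
+log:                  # Server log level
+  stdout: true
+  level: "info"
+persistence:          # datastores and History Shard count
+  # ...
+global:               # membership, TLS, authorization, metrics
+  # ...
+services:             # frontend, history, matching, worker (ports)
+  # ...
+clusterMetadata:      # Temporal Service metadata
+  # ...
+dynamicConfigClient:  # points at the dynamic configuration file
+  filepath: "config/dynamicconfig/development.yaml"
+  pollInterval: "60s"
+```
+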
### Dynamic configuration
-Dynamic configuration contains configuration keys that you can update in your Cluster setup without having to restart the server processes.
+Dynamic configuration contains configuration keys that you can update in your Temporal Service setup without having to restart the server processes.
-All dynamic configuration keys provided by Temporal have default values that are used by the Cluster.
+All dynamic configuration keys provided by Temporal have default values that are used by the Temporal Service.
You can override the default values by setting different values for the keys in a YAML file and setting the [dynamic configuration client](/references/configuration#dynamicconfigclient) to poll this file for updates.
-Setting dynamic configuration for your Cluster is optional.
+Setting dynamic configuration for your Temporal Service is optional.
-Setting overrides for some configuration keys updates the Cluster configuration immediately.
+Setting overrides for some configuration keys updates the Temporal Service configuration immediately.
However, for configuration fields that are checked at startup (such as thread pool size), you must restart the server for the changes to take effect.
-Use dynamic configuration keys to fine-tune your self-deployed Cluster setup.
+Use dynamic configuration keys to fine-tune your self-hosted Temporal Service setup.
For details on dynamic configuration keys, see [Dynamic configuration reference](/references/dynamic-configuration).
For dynamic configuration examples, see [https://github.com/temporalio/temporal/tree/master/config/dynamicconfig](https://github.com/temporalio/temporal/tree/master/config/dynamicconfig).
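+
+As an illustration, an override file is a YAML map from a configuration key to a list of values with optional constraints. The keys shown here are examples drawn from the dynamic configuration reference; verify them against your Server version before use:
+
+```yaml
+# Polled by the dynamicConfigClient defined in the static configuration
+frontend.enableClientVersionCheck:
+  - value: true
+    constraints: {}
+history.persistenceMaxQPS:
+  - value: 3000
+    constraints: {}
+```
+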
-### What is Cluster security configuration? {#temporal-cluster-security-configuration}
+### What is Temporal Service security configuration? {#temporal-cluster-security-configuration}
-Secure your Temporal Cluster (self-hosted and Temporal Cloud) by encrypting your network communication and setting authentication and authorization protocols for API calls.
+Secure your Temporal Service (self-hosted and Temporal Cloud) by encrypting your network communication and setting authentication and authorization protocols for API calls.
-For details on setting up your Temporal Cluster security, see [Temporal Platform security features](/security).
+For details on setting up your Temporal Service security, see [Temporal Platform security features](/security).
#### mTLS encryption
-Temporal supports Mutual Transport Layer Security (mTLS) to encrypt network traffic between services within a Temporal Cluster, or between application processes and a Cluster.
+Temporal supports Mutual Transport Layer Security (mTLS) to encrypt network traffic between services within a Temporal Service, or between application processes and a Temporal Service.
-On self-hosted Temporal Clusters, configure mTLS in the `tls` section of the [Cluster configuration](/references/configuration#tls).
+On the self-hosted Temporal Service, configure mTLS in the `tls` section of the [Temporal Service configuration](/references/configuration#tls).
mTLS configuration is a [static configuration](#static-configuration) property.
-You can then use either the [`WithConfig`](/references/server-options#withconfig) or [`WithConfigLoader`](/references/server-options#withconfigloader) server option to start your Temporal Cluster with this configuration.
+You can then use either the [`WithConfig`](/references/server-options#withconfig) or [`WithConfigLoader`](/references/server-options#withconfigloader) server option to start your Temporal Service with this configuration.
-The mTLS configuration includes two sections that serve to separate communication within a Temporal Cluster and client calls made from your application to the Cluster.
+The mTLS configuration includes two sections that serve to separate communication within a Temporal Service and client calls made from your application to the Temporal Service.
-- `internode`: configuration for encrypting communication between nodes within the Cluster.
+- `internode`: configuration for encrypting communication between nodes within the Temporal Service.
- `frontend`: configuration for encrypting the public endpoints of the Frontend Service.
Setting mTLS for `internode` and `frontend` separately lets you use different certificates and settings to encrypt each section of traffic.
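+
+A minimal sketch of this two-section layout follows. The property names and nesting are taken from the [tls configuration reference](/references/configuration#tls) and the sample TLS configs; the file paths are placeholders:
+
+```yaml
+global:
+  tls:
+    internode:
+      server:
+        certFile: /path/to/internode-cert.pem
+        keyFile: /path/to/internode-key.pem
+        requireClientAuth: true
+        clientCaFiles:
+          - /path/to/internode-ca.pem
+      client:
+        rootCaFiles:
+          - /path/to/internode-ca.pem
+    frontend:
+      server:
+        certFile: /path/to/frontend-cert.pem
+        keyFile: /path/to/frontend-key.pem
+```
+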
#### Using certificates for Client connections
-Use CA certificates to authenticate client connections to your Temporal Cluster.
+Use CA certificates to authenticate client connections to your Temporal Service.
On Temporal Cloud, you can [set your CA certificates in your Temporal Cloud settings](/cloud/certificates) and use the end-entity certificates in your client calls.
-On self-hosted Temporal Clusters, you can restrict access to Temporal Cluster endpoints by using the `clientCAFiles` or `clientCAData` property and the [`requireClientAuth`](/references/configuration#tls) property in your Cluster configuration.
+On the self-hosted Temporal Service, you can restrict access to Temporal Service endpoints by using the `clientCAFiles` or `clientCAData` property and the [`requireClientAuth`](/references/configuration#tls) property in your Temporal Service configuration.
These properties can be specified in both the `internode` and `frontend` sections of the [mTLS configuration](/references/configuration#tls).
For details, see the [tls configuration reference](/references/configuration#tls).
#### Server name specification
-On self-hosted Temporal Clusters, you can specify `serverName` in the `client` section of your mTLS configuration to prevent spoofing and [MITM attacks](https://en.wikipedia.org/wiki/Man-in-the-middle_attack).
+On the self-hosted Temporal Service, you can specify `serverName` in the `client` section of your mTLS configuration to prevent spoofing and [MITM attacks](https://en.wikipedia.org/wiki/Man-in-the-middle_attack).
Entering a value for `serverName` enables established connections to authenticate the endpoint.
This ensures that the server certificate presented to any connected client has the specified server name in its CN property.
@@ -407,7 +413,7 @@ For more information on mTLS configuration, see [tls configuration reference](/r
+**Authorization** is the verification of applications and data that a user on your Temporal Service or application has access to. -->
Temporal provides authentication interfaces that can be set to restrict access to your data.
These protocols address three areas: servers, client connections, and users.
@@ -420,9 +426,9 @@ Temporal offers two plugin interfaces for authentication and authorization of AP
The logic of both plugins can be customized to fit a variety of use cases.
When plugins are provided, the Frontend Service invokes their implementation before running the requested operation.
-### What is Cluster observability? {#monitoring-and-observation}
+### What is Temporal Service observability? {#monitoring-and-observation}
-You can monitor and observe performance with metrics emitted by your self-hosted Temporal Cluster or by Temporal Cloud.
+You can monitor and observe performance with metrics emitted by your self-hosted Temporal Service or by Temporal Cloud.
Temporal emits metrics by default in a format that is supported by Prometheus.
Any metrics software that supports the same format can be used.
@@ -438,13 +444,13 @@ For details on Cloud metrics and setup, see the following:
- [Temporal Cloud metrics reference](/cloud/metrics/)
- [Set up Grafana with Temporal Cloud observability to view metrics](/cloud/metrics/prometheus-grafana#grafana-data-sources-configuration)
-On self-hosted Temporal Clusters, expose Prometheus endpoints in your Cluster configuration and configure Prometheus to scrape metrics from the endpoints.
+On the self-hosted Temporal Service, expose Prometheus endpoints in your Temporal Service configuration and configure Prometheus to scrape metrics from the endpoints.
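+
+For example, a Prometheus listener can be exposed through the `metrics` section of the static configuration. This is a sketch; the listen address is a placeholder and the key names follow the sample configurations:
+
+```yaml
+global:
+  metrics:
+    prometheus:
+      timerType: "histogram"
+      listenAddress: "0.0.0.0:8000"
+```
+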
You can then set up your observability platform (such as Grafana) to use Prometheus as a data source.
-For details on self-hosted Cluster metrics and setup, see the following:
+For details on self-hosted Temporal Service metrics and setup, see the following:
-- [Temporal Cluster OSS metrics reference](/references/cluster-metrics)
-- [Set up Prometheus and Grafana to view SDK and self-hosted Cluster metrics](/self-hosted-guide/monitoring)
+- [Temporal Service OSS metrics reference](/references/cluster-metrics)
+- [Set up Prometheus and Grafana to view SDK and self-hosted Temporal Service metrics](/self-hosted-guide/monitoring)
## What is Multi-Cluster Replication? {#multi-cluster-replication}
@@ -630,7 +636,7 @@ View in both Cluster A & B
-Since Temporal is AP, during failover (change of active Temporal Cluster Namespace), there can exist cases where more than one Cluster can modify a Workflow Execution, causing divergence of Workflow Execution History. Below shows how the version history will look like under such conditions.
+Since Temporal is AP, during failover (change of the active Temporal Service Namespace), more than one Cluster can modify a Workflow Execution, causing divergence of Workflow Execution History. The following shows how the version history looks under such conditions.
diff --git a/docs/glossary.md b/docs/glossary.md
index f62abc2779..1d43470f3f 100644
--- a/docs/glossary.md
+++ b/docs/glossary.md
@@ -37,7 +37,9 @@ _Tags: [term](/tags/term), [explanation](/tags/explanation)_
#### [Activity Heartbeat](/activities#activity-heartbeat)
-An Activity Heartbeat is a ping from the Worker that is executing the Activity to the Temporal Cluster. Each ping informs the Temporal Cluster that the Activity Execution is making progress and the Worker has not crashed.
+An Activity Heartbeat is a ping sent to the Temporal Service by the Worker that is executing the Activity.
+
+Each ping informs the Temporal Service that the Activity Execution is making progress and the Worker has not crashed.
_Tags: [term](/tags/term), [explanation](/tags/explanation)_
@@ -67,7 +69,7 @@ _Tags: [term](/tags/term), [explanation](/tags/explanation)_
#### [Archival](/clusters#archival)
-Archival is a feature that automatically backs up Event Histories from Temporal Cluster persistence to a custom blob store after the Closed Workflow Execution retention period is reached.
+Archival is a feature specific to a self-hosted Temporal Service that automatically backs up Event Histories from Temporal Service persistence to a custom blob store after the Closed Workflow Execution retention period is reached.
_Tags: [term](/tags/term), [explanation](/tags/explanation)_
@@ -101,12 +103,6 @@ The Claim Mapper component is a pluggable component that extracts Claims from JS
_Tags: [term](/tags/term)_
-#### [Cluster configuration](/clusters#cluster-configuration)
-
-Cluster Configuration is the setup and configuration details of your Temporal Cluster, defined using YAML.
-
-_Tags: [term](/tags/term), [explanation](/tags/explanation)_
-
#### [Codec Server](/dataconversion#codec-server)
A Codec Server is an HTTP server that uses your custom Payload Codec to encode and decode your data remotely through endpoints.
@@ -115,7 +111,7 @@ _Tags: [term](/tags/term)_
#### [Command](/workflows#command)
-A Command is a requested action issued by a Worker to the Temporal Cluster after a Workflow Task Execution completes.
+A Command is a requested action issued by a Worker to the Temporal Service after a Workflow Task Execution completes.
_Tags: [term](/tags/term), [explanation](/tags/explanation)_
@@ -133,7 +129,7 @@ _Tags: [term](/tags/term), [explanation](/tags/explanation)_
#### [Data Converter](/dataconversion)
-A Data Converter is a Temporal SDK component that serializes and encodes data entering and exiting a Temporal Cluster.
+A Data Converter is a Temporal SDK component that serializes and encodes data entering and exiting a Temporal Service.
_Tags: [term](/tags/term), [explanation](/tags/explanation)_
@@ -151,7 +147,7 @@ _Tags: [term](/tags/term), [explanation](/tags/explanation), [delay-workflow](/t
#### [Dual Visibility](/visibility#dual-visibility)
-Dual Visibility is a feature that lets you set a secondary Visibility store in your Temporal Cluster to facilitate migrating your Visibility data from one database to another.
+Dual Visibility is a feature, specific to a self-hosted Temporal Service, that lets you set a secondary Visibility store in your Temporal Service to facilitate migrating your Visibility data from one database to another.
_Tags: [term](/tags/term), [explanation](/tags/explanation), [filtered-lists](/tags/filtered-lists), [visibility](/tags/visibility)_
@@ -169,7 +165,7 @@ _Tags: [term](/tags/term), [explanation](/tags/explanation)_
#### [Event](/workflows#event)
-Events are created by the Temporal Cluster in response to external occurrences and Commands generated by a Workflow Execution.
+Events are created by a Temporal Service in response to external occurrences and Commands generated by a Workflow Execution.
_Tags: [term](/tags/term), [explanation](/tags/explanation)_
@@ -229,7 +225,7 @@ _Tags: [term](/tags/term)_
#### [History Shard](/clusters#history-shard)
-A History Shard is an important unit within a Temporal Cluster by which the scale of concurrent Workflow Execution throughput can be measured.
+A History Shard is an important unit within a Temporal Service by which the scale of concurrent Workflow Execution throughput can be measured.
_Tags: [term](/tags/term)_
@@ -293,12 +289,6 @@ A Payload Converter serializes data, converting objects or values to bytes and b
_Tags: [term](/tags/term), [explanation](/tags/explanation)_
-#### [Persistence](/clusters#persistence)
-
-The Temporal Persistence store is a database used by Temporal Services to persist events generated and processed in the Temporal Cluster and SDK.
-
-_Tags: [term](/tags/term), [explanation](/tags/explanation)_
-
#### [Pre-release](/evaluate/release-stages#pre-release)
Learn more about the Pre-release stage
@@ -331,7 +321,7 @@ _Tags: [term](/tags/term), [resets](/tags/resets), [explanation](/tags/explanati
#### [Retention Period](/clusters#retention-period)
-A Retention Period is the amount of time a Workflow Execution Event History remains in the Cluster's persistence store.
+A Retention Period is the amount of time a Workflow Execution Event History remains in the Temporal Service's persistence store.
_Tags: [term](/tags/term), [explanation](/tags/explanation)_
@@ -451,7 +441,7 @@ _Tags: [term](/tags/term), [cli](/tags/cli)_
#### [Temporal Client](/encyclopedia/temporal-sdks#temporal-client)
-A Temporal Client, provided by a Temporal SDK, provides a set of APIs to communicate with a Temporal Cluster.
+A Temporal Client, provided by a Temporal SDK, provides a set of APIs to communicate with a Temporal Service.
_Tags: [term](/tags/term), [explanation](/tags/explanation)_
@@ -486,8 +476,18 @@ A Cloud gRPC Endpoint is a Namespace-specific address used to access Temporal Cl
_Tags: [term](/tags/term), [explanation](/tags/explanation)_
#### [Temporal Cluster](/clusters)
+The term "Temporal Cluster" is being phased out.
+Instead, the term [Temporal Service](#temporal-service) is now used.
+
+#### [Temporal Service](/clusters)
-A Temporal Cluster is a Temporal Server paired with Persistence and Visibility stores.
+A Temporal Service is a Temporal Server paired with Persistence and Visibility stores.
+
+_Tags: [term](/tags/term), [explanation](/tags/explanation)_
+
+#### [Temporal Service configuration](/clusters#cluster-configuration)
+
+Temporal Service configuration is the setup and configuration details of your Temporal Service, defined using YAML.
_Tags: [term](/tags/term), [explanation](/tags/explanation)_
@@ -499,13 +499,13 @@ _Tags: [term](/tags/term), [explanation](/tags/explanation)_
#### [Temporal Platform](/temporal#temporal-platform)
-The Temporal Platform consists of a Temporal Cluster and Worker Processes.
+The Temporal Platform consists of a Temporal Service and Worker Processes.
_Tags: [term](/tags/term), [explanation](/tags/explanation)_
#### [Temporal SDK](/encyclopedia/temporal-sdks)
-A Temporal SDK is a language-specific library that offers APIs to construct and use a Temporal Client to communicate with a Temporal Cluster, develop Workflow Definitions, and develop Worker Programs.
+A Temporal SDK is a language-specific library that offers APIs to construct and use a Temporal Client to communicate with a Temporal Service, develop Workflow Definitions, and develop Worker Programs.
_Tags: [term](/tags/term), [explanation](/tags/explanation)_
@@ -515,6 +515,7 @@ The Temporal Server is a grouping of four horizontally scalable services.
_Tags: [term](/tags/term), [explanation](/tags/explanation)_
+
#### [Temporal Web UI](/web-ui)
The Temporal Web UI provides users with Workflow Execution state and metadata for debugging purposes.
@@ -535,7 +536,7 @@ _Tags: [term](/tags/term), [updates](/tags/updates), [explanation](/tags/explana
#### [Visibility](/clusters#visibility)
-The term Visibility, within the Temporal Platform, refers to the subsystems and APIs that enable an operator to view Workflow Executions that currently exist within a Cluster.
+The term Visibility, within the Temporal Platform, refers to the subsystems and APIs that enable an operator to view Workflow Executions that currently exist within a Temporal Service.
_Tags: [term](/tags/term)_
@@ -649,5 +650,5 @@ _Tags: [term](/tags/term), [explanation](/tags/explanation)_
#### tctl (_deprecated_)
-tctl is a command-line tool that you can use to interact with a Temporal Cluster.
+tctl is a command-line tool that you can use to interact with a Temporal Service.
It is superseded by the [Temporal CLI utility](#cli).
diff --git a/static/diagrams/temporal-cluster.svg b/static/diagrams/temporal-cluster.svg
index 661bcd9028..124d2bcdef 100644
--- a/static/diagrams/temporal-cluster.svg
+++ b/static/diagrams/temporal-cluster.svg
@@ -1 +1 @@
-
\ No newline at end of file
+
\ No newline at end of file
diff --git a/static/diagrams/temporal-database.svg b/static/diagrams/temporal-database.svg
index d86c5938dc..499f5fbe77 100644
--- a/static/diagrams/temporal-database.svg
+++ b/static/diagrams/temporal-database.svg
@@ -1 +1 @@
-
\ No newline at end of file
+
\ No newline at end of file
diff --git a/static/diagrams/temporal-frontend-service.svg b/static/diagrams/temporal-frontend-service.svg
index 0365560e8f..827a938c12 100644
--- a/static/diagrams/temporal-frontend-service.svg
+++ b/static/diagrams/temporal-frontend-service.svg
@@ -1 +1 @@
-
\ No newline at end of file
+
\ No newline at end of file
diff --git a/static/diagrams/temporal-history-service.svg b/static/diagrams/temporal-history-service.svg
index 2dbe0e934d..0e5c1bd9be 100644
--- a/static/diagrams/temporal-history-service.svg
+++ b/static/diagrams/temporal-history-service.svg
@@ -1 +1 @@
-
\ No newline at end of file
+
\ No newline at end of file
diff --git a/static/diagrams/temporal-matching-service.svg b/static/diagrams/temporal-matching-service.svg
index 17e1a316a1..149194ce5e 100644
--- a/static/diagrams/temporal-matching-service.svg
+++ b/static/diagrams/temporal-matching-service.svg
@@ -1 +1 @@
-
\ No newline at end of file
+
\ No newline at end of file
diff --git a/static/diagrams/temporal-worker-service.svg b/static/diagrams/temporal-worker-service.svg
index e84159a0a6..51044f61ff 100644
--- a/static/diagrams/temporal-worker-service.svg
+++ b/static/diagrams/temporal-worker-service.svg
@@ -1 +1 @@
-
\ No newline at end of file
+
\ No newline at end of file