From 17f10e5443cd81dbb43f138dee3b30c18f25a3a8 Mon Sep 17 00:00:00 2001
From: aws-sdk-cpp-automation

 * <p>After a <code>GetResourcePolicy</code> request returns a policy
- * created using the <code>PutResourcePolicy</code> request, you can assume the
- * policy will start getting applied in the authorization of requests to the
- * resource. Because this process is eventually consistent, it will take some time
- * to apply the policy to all requests to a resource. Policies that you attach
- * while creating a table using the <code>CreateTable</code> request will always be
- * applied to all requests for that table.</p>
+ * created using the <code>PutResourcePolicy</code> request, the policy will be
+ * applied in the authorization of requests to the resource. Because this process
+ * is eventually consistent, it will take some time to apply the policy to all
+ * requests to a resource. Policies that you attach while creating a table using
+ * the <code>CreateTable</code> request will always be applied to all requests for
+ * that table.</p> <p><code>PutResourcePolicy</code> is an
 * idempotent operation; running it multiple times on the same resource using the
 * same policy document will return the same revision ID. If you specify an
- * <code>ExpectedRevisionId</code> which doesn't match the current policy's
+ * <code>ExpectedRevisionId</code> that doesn't match the current policy's
 * <code>RevisionId</code>, the <code>PolicyNotFoundException</code> will be
 * returned.</p>
+ * <p>The maximum number of read and write units for the global secondary index
+ * being created. If you use this parameter, you must specify
+ * <code>MaxReadRequestUnits</code>, <code>MaxWriteRequestUnits</code>, or both.</p>
+ * <p>The maximum on-demand throughput settings for the specified replica table
+ * being created. You can only modify <code>MaxReadRequestUnits</code>, because you
+ * can't modify <code>MaxWriteRequestUnits</code> for individual replica tables.</p>
 * <p>Replica-specific global secondary index settings.</p> <p>An Amazon Web Services resource-based policy document in JSON format that
 * will be attached to the table.</p> <p>When you attach a resource-based policy
- * while creating a table, the policy creation is strongly consistent. The maximum
- * size supported for a resource-based policy document is 20 KB.
- * DynamoDB counts whitespaces when calculating the size of a policy against this
- * limit. You can’t request an increase for this limit. For a full list of all
- * considerations that you should keep in mind while attaching a resource-based
- * policy, see Resource-based
+ * while creating a table, the policy creation is strongly
+ * consistent. The maximum size supported for a resource-based policy
+ * document is 20 KB. DynamoDB counts whitespaces when calculating the size of a
+ * policy against this limit. For a full list of all considerations that apply for
+ * resource-based policies, see Resource-based
 * policy considerations.</p>
+ * <p>Sets the maximum number of read and write units for the specified table in
+ * on-demand capacity mode. If you use this parameter, you must specify
+ * <code>MaxReadRequestUnits</code>, <code>MaxWriteRequestUnits</code>, or both.</p>
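Since the resource-based policy and the on-demand throughput cap both surface at table creation, a single CreateTable call illustrates them together. This is a minimal sketch against the generated C++ client, assuming the SetResourcePolicy and SetOnDemandThroughput accessors this patch generates; the table name, key schema, and policy document are placeholders.

```cpp
#include <aws/dynamodb/DynamoDBClient.h>
#include <aws/dynamodb/model/CreateTableRequest.h>
#include <aws/dynamodb/model/OnDemandThroughput.h>
#include <aws/dynamodb/model/AttributeDefinition.h>
#include <aws/dynamodb/model/KeySchemaElement.h>

using namespace Aws::DynamoDB::Model;

void CreateTableWithPolicyAndCaps(Aws::DynamoDB::DynamoDBClient& client) {
    CreateTableRequest request;
    request.SetTableName("MyTable"); // placeholder name

    AttributeDefinition pk;
    pk.SetAttributeName("Id");
    pk.SetAttributeType(ScalarAttributeType::S);
    request.AddAttributeDefinitions(pk);

    KeySchemaElement key;
    key.SetAttributeName("Id");
    key.SetKeyType(KeyType::HASH);
    request.AddKeySchema(key);

    request.SetBillingMode(BillingMode::PAY_PER_REQUEST);

    // Cap on-demand usage; MaxReadRequestUnits, MaxWriteRequestUnits, or both may be set.
    OnDemandThroughput throughput;
    throughput.SetMaxReadRequestUnits(100);
    throughput.SetMaxWriteRequestUnits(50);
    request.SetOnDemandThroughput(throughput);

    // A policy attached while creating the table is applied strongly consistently.
    request.SetResourcePolicy(R"({"Version":"2012-10-17","Statement":[]})"); // placeholder document

    auto outcome = client.CreateTable(request);
}
```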
- * <p>A unique string that represents the revision ID of the policy. If you are
+ * <p>A unique string that represents the revision ID of the policy. If you're
 * comparing revision IDs, make sure to always use string comparison logic.</p>
 * <p>This value will be empty if you make a request against a resource without a
 * policy.</p>
+ * <p>The maximum number of read and write units for the specified global secondary
+ * index. If you use this parameter, you must specify
+ * <code>MaxReadRequestUnits</code>, <code>MaxWriteRequestUnits</code>, or both.</p>
+ * <p>Sets the maximum number of read and write units for the specified on-demand
+ * table. If you use this parameter, you must specify
+ * <code>MaxReadRequestUnits</code>, <code>MaxWriteRequestUnits</code>, or both.</p>
 * <p><code>PutResourcePolicy</code> is an asynchronous
 * operation. If you issue a <code>GetResourcePolicy</code> request immediately
 * after a <code>PutResourcePolicy</code> request, DynamoDB might return your
 * previous policy, if there was one, or return the
 * <code>PolicyNotFoundException</code>.</p>
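Taken together, the PutResourcePolicy semantics above suggest a usage pattern like the following. This is a minimal sketch against the generated C++ client, assuming the accessors this patch documents (SetResourceArn, SetPolicy, SetExpectedRevisionId, and the result's GetRevisionId); the ARN and policy document are placeholders.

```cpp
#include <aws/dynamodb/DynamoDBClient.h>
#include <aws/dynamodb/model/PutResourcePolicyRequest.h>

void AttachPolicyIfAbsent(Aws::DynamoDB::DynamoDBClient& client) {
    Aws::DynamoDB::Model::PutResourcePolicyRequest request;
    request.SetResourceArn("arn:aws:dynamodb:us-east-1:123456789012:table/MyTable"); // placeholder ARN
    request.SetPolicy(R"({"Version":"2012-10-17","Statement":[]})"); // placeholder document
    // Conditional write: succeed only if no policy exists yet.
    request.SetExpectedRevisionId("NO_POLICY");

    auto outcome = client.PutResourcePolicy(request);
    if (outcome.IsSuccess()) {
        // Revision IDs are opaque; compare them with string comparison logic.
        Aws::String revisionId = outcome.GetResult().GetRevisionId();
        // GetResourcePolicy is eventually consistent, so an immediate read may
        // still return PolicyNotFoundException; retry after a short wait.
    }
}
```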
diff --git a/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/CreateGlobalSecondaryIndexAction.h b/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/CreateGlobalSecondaryIndexAction.h
index d26834882eb..97df5c27a58 100644
--- a/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/CreateGlobalSecondaryIndexAction.h
+++ b/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/CreateGlobalSecondaryIndexAction.h
@@ -9,6 +9,7 @@
+#include

 /**
+ * <p>The maximum number of read and write units for the global secondary index
+ * being created. If you use this parameter, you must specify
+ * <code>MaxReadRequestUnits</code>, <code>MaxWriteRequestUnits</code>, or
+ * both.</p>
 */
 /**
+ * <p>The maximum on-demand throughput settings for the specified replica table
+ * being created. You can only modify <code>MaxReadRequestUnits</code>, because you
+ * can't modify <code>MaxWriteRequestUnits</code> for individual replica tables.</p>
 */
 /**
+ * <p>Sets the maximum number of read and write units for the specified table in
+ * on-demand capacity mode. If you use this parameter, you must specify
+ * <code>MaxReadRequestUnits</code>, <code>MaxWriteRequestUnits</code>, or
+ * both.</p><p><h3>See Also:</h3>   AWS
+ * API Reference</p>
 */
 /**
+ * <p>Maximum number of read request units for the specified table.</p> <p>To
+ * specify a maximum <code>OnDemandThroughput</code> on your table, set the value
+ * of <code>MaxReadRequestUnits</code> as greater than or equal to 1. To remove the
+ * maximum <code>OnDemandThroughput</code> that is currently set on your table, set
+ * the value of <code>MaxReadRequestUnits</code> to -1.</p>
 */
 /**
+ * <p>Maximum number of write request units for the specified table.</p> <p>To
+ * specify a maximum <code>OnDemandThroughput</code> on your table, set the value
+ * of <code>MaxWriteRequestUnits</code> as greater than or equal to 1. To remove
+ * the maximum <code>OnDemandThroughput</code> that is currently set on your table,
+ * set the value of <code>MaxWriteRequestUnits</code> to -1.</p>
 */
+/**
+ * <p>Overrides the on-demand throughput settings for this replica table. If you
+ * don't specify a value for this parameter, it uses the source table's on-demand
+ * throughput settings.</p>
+ */
+    /**
+     * <p>Maximum number of read request units for the specified replica table.</p>
+     */
+    inline long long GetMaxReadRequestUnits() const{ return m_maxReadRequestUnits; }
+    inline bool MaxReadRequestUnitsHasBeenSet() const { return m_maxReadRequestUnitsHasBeenSet; }
+    inline void SetMaxReadRequestUnits(long long value) { m_maxReadRequestUnitsHasBeenSet = true; m_maxReadRequestUnits = value; }
+    inline OnDemandThroughputOverride& WithMaxReadRequestUnits(long long value) { SetMaxReadRequestUnits(value); return *this;}
+
+  private:
+
+    long long m_maxReadRequestUnits;
+    bool m_maxReadRequestUnitsHasBeenSet = false;
+  };
+
+} // namespace Model
+} // namespace DynamoDB
+} // namespace Aws
diff --git a/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/PutResourcePolicyRequest.h b/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/PutResourcePolicyRequest.h
index 57c04729520..c031f81c423 100644
--- a/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/PutResourcePolicyRequest.h
+++ b/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/PutResourcePolicyRequest.h
@@ -133,10 +133,13 @@ namespace Model
   /**
    * <p>An Amazon Web Services resource-based policy document in JSON format.</p>
-   * <p>The maximum size supported for a resource-based policy document is 20 KB.
-   * DynamoDB counts whitespaces when calculating the size of a policy against this
-   * limit. For a full list of all considerations that you should keep in mind while
-   * attaching a resource-based policy, see Resource-based
+   * <p>The maximum size supported for a resource-based policy document is
+   * 20 KB. DynamoDB counts whitespaces when calculating the size of a policy against
+   * this limit.</p> <p>Within a resource-based policy, if the action for
+   * a DynamoDB service-linked role (SLR) to replicate data for a global table is
+   * denied, adding or deleting a replica will fail with an error.</p> <p>For a
+   * full list of all considerations that apply while attaching a
+   * resource-based policy, see Resource-based
    * policy considerations.</p>
   /**
    * <p>A string value that you can use to conditionally update your policy. You can
    * provide the revision ID of your existing policy to make mutating requests
-   * against that policy. When you provide an expected revision ID, if the revision
-   * ID of the existing policy on the resource doesn't match or if there's no policy
-   * attached to the resource, your request will be rejected with a
-   * <code>PolicyNotFoundException</code>.</p> <p>To conditionally put a policy when
-   * no policy exists for the resource, specify <code>NO_POLICY</code> for the
-   * revision ID.</p>
+   * against that policy.</p> <p>When you provide an expected revision ID, if
+   * the revision ID of the existing policy on the resource doesn't match or if
+   * there's no policy attached to the resource, your request will be rejected with a
+   * <code>PolicyNotFoundException</code>.</p> <p>To conditionally attach a
+   * policy when no policy exists for the resource, specify <code>NO_POLICY</code>
+   * for the revision ID.</p>
    */
   /**
-   * <p>A unique string that represents the revision ID of the policy. If you are
+   * <p>A unique string that represents the revision ID of the policy. If you're
    * comparing revision IDs, make sure to always use string comparison logic.</p>
    */
   inline const Aws::String& GetRevisionId() const{ return m_revisionId; }
   inline void SetRevisionId(const Aws::String& value) { m_revisionId = value; }
   inline void SetRevisionId(Aws::String&& value) { m_revisionId = std::move(value); }
   inline void SetRevisionId(const char* value) { m_revisionId.assign(value); }
   inline PutResourcePolicyResult& WithRevisionId(const Aws::String& value) { SetRevisionId(value); return *this;}
   inline PutResourcePolicyResult& WithRevisionId(Aws::String&& value) { SetRevisionId(std::move(value)); return *this;}
   inline PutResourcePolicyResult& WithRevisionId(const char* value) { SetRevisionId(value); return *this;}
diff --git a/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/ReplicaDescription.h b/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/ReplicaDescription.h
index f28f92578b3..bd12d2a4e96 100644
--- a/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/ReplicaDescription.h
+++ b/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/ReplicaDescription.h
@@ -8,6 +8,7 @@
 #include
+   /**
+    * <p>Overrides the maximum on-demand throughput settings for the specified
+    * replica table.</p>
+    */
+   inline const OnDemandThroughputOverride& GetOnDemandThroughputOverride() const{ return m_onDemandThroughputOverride; }
+   inline bool OnDemandThroughputOverrideHasBeenSet() const { return m_onDemandThroughputOverrideHasBeenSet; }
+   inline void SetOnDemandThroughputOverride(const OnDemandThroughputOverride& value) { m_onDemandThroughputOverrideHasBeenSet = true; m_onDemandThroughputOverride = value; }
+   inline void SetOnDemandThroughputOverride(OnDemandThroughputOverride&& value) { m_onDemandThroughputOverrideHasBeenSet = true; m_onDemandThroughputOverride = std::move(value); }
+   inline ReplicaDescription& WithOnDemandThroughputOverride(const OnDemandThroughputOverride& value) { SetOnDemandThroughputOverride(value); return *this;}
+   inline ReplicaDescription& WithOnDemandThroughputOverride(OnDemandThroughputOverride&& value) { SetOnDemandThroughputOverride(std::move(value)); return *this;}

   /**
    * <p>Replica-specific global secondary index settings.</p>
    */
@@ -483,6 +521,9 @@ namespace Model
    ProvisionedThroughputOverride m_provisionedThroughputOverride;
    bool m_provisionedThroughputOverrideHasBeenSet = false;

+   OnDemandThroughputOverride m_onDemandThroughputOverride;
+   bool m_onDemandThroughputOverrideHasBeenSet = false;
+
    Aws::Vector
diff --git a/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/ReplicaGlobalSecondaryIndex.h b/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/ReplicaGlobalSecondaryIndex.h
@@ -8,6 +8,7 @@
 #include
+   /**
+    * <p>Overrides the maximum on-demand throughput settings for the specified global
+    * secondary index in the specified replica table.</p>
+    */
+   inline const OnDemandThroughputOverride& GetOnDemandThroughputOverride() const{ return m_onDemandThroughputOverride; }
+   inline bool OnDemandThroughputOverrideHasBeenSet() const { return m_onDemandThroughputOverrideHasBeenSet; }
+   inline void SetOnDemandThroughputOverride(const OnDemandThroughputOverride& value) { m_onDemandThroughputOverrideHasBeenSet = true; m_onDemandThroughputOverride = value; }
+   inline void SetOnDemandThroughputOverride(OnDemandThroughputOverride&& value) { m_onDemandThroughputOverrideHasBeenSet = true; m_onDemandThroughputOverride = std::move(value); }
+   inline ReplicaGlobalSecondaryIndex& WithOnDemandThroughputOverride(const OnDemandThroughputOverride& value) { SetOnDemandThroughputOverride(value); return *this;}
+   inline ReplicaGlobalSecondaryIndex& WithOnDemandThroughputOverride(OnDemandThroughputOverride&& value) { SetOnDemandThroughputOverride(std::move(value)); return *this;}

  private:

    Aws::String m_indexName;
@@ -123,6 +161,9 @@ namespace Model
    ProvisionedThroughputOverride m_provisionedThroughputOverride;
    bool m_provisionedThroughputOverrideHasBeenSet = false;
+
+   OnDemandThroughputOverride m_onDemandThroughputOverride;
+   bool m_onDemandThroughputOverrideHasBeenSet = false;
  };

} // namespace Model
diff --git a/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/ReplicaGlobalSecondaryIndexDescription.h b/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/ReplicaGlobalSecondaryIndexDescription.h
index de2bfafd81a..1b84803d740 100644
--- a/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/ReplicaGlobalSecondaryIndexDescription.h
+++ b/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/ReplicaGlobalSecondaryIndexDescription.h
@@ -7,6 +7,7 @@
 #include
+   /**
+    * <p>Overrides the maximum on-demand throughput for the specified global secondary
+    * index in the specified replica table.</p>
+    */
+   inline const OnDemandThroughputOverride& GetOnDemandThroughputOverride() const{ return m_onDemandThroughputOverride; }
+   inline bool OnDemandThroughputOverrideHasBeenSet() const { return m_onDemandThroughputOverrideHasBeenSet; }
+   inline void SetOnDemandThroughputOverride(const OnDemandThroughputOverride& value) { m_onDemandThroughputOverrideHasBeenSet = true; m_onDemandThroughputOverride = value; }
+   inline void SetOnDemandThroughputOverride(OnDemandThroughputOverride&& value) { m_onDemandThroughputOverrideHasBeenSet = true; m_onDemandThroughputOverride = std::move(value); }
+   inline ReplicaGlobalSecondaryIndexDescription& WithOnDemandThroughputOverride(const OnDemandThroughputOverride& value) { SetOnDemandThroughputOverride(value); return *this;}
+   inline ReplicaGlobalSecondaryIndexDescription& WithOnDemandThroughputOverride(OnDemandThroughputOverride&& value) { SetOnDemandThroughputOverride(std::move(value)); return *this;}

  private:

    Aws::String m_indexName;
@@ -117,6 +155,9 @@ namespace Model
    ProvisionedThroughputOverride m_provisionedThroughputOverride;
    bool m_provisionedThroughputOverrideHasBeenSet = false;
+
+   OnDemandThroughputOverride m_onDemandThroughputOverride;
+   bool m_onDemandThroughputOverrideHasBeenSet = false;
  };

} // namespace Model
diff --git a/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/RestoreTableFromBackupRequest.h b/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/RestoreTableFromBackupRequest.h
index 3af5987df15..47d42a0f813 100644
--- a/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/RestoreTableFromBackupRequest.h
+++ b/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/RestoreTableFromBackupRequest.h
@@ -10,6 +10,7 @@
 #include
   /**
    * <p>The new server-side encryption settings for the restored table.</p>
    */
@@ -348,6 +368,9 @@ namespace Model
    ProvisionedThroughput m_provisionedThroughputOverride;
    bool m_provisionedThroughputOverrideHasBeenSet = false;

+   OnDemandThroughput m_onDemandThroughputOverride;
+   bool m_onDemandThroughputOverrideHasBeenSet = false;
+
    SSESpecification m_sSESpecificationOverride;
    bool m_sSESpecificationOverrideHasBeenSet = false;
  };
diff --git a/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/RestoreTableToPointInTimeRequest.h b/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/RestoreTableToPointInTimeRequest.h
index 91a54682bd2..6abea8f255f 100644
--- a/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/RestoreTableToPointInTimeRequest.h
+++ b/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/RestoreTableToPointInTimeRequest.h
@@ -11,6 +11,7 @@
 #include
   /**
    * <p>The new server-side encryption settings for the restored table.</p>
    */
@@ -467,6 +487,9 @@ namespace Model
    ProvisionedThroughput m_provisionedThroughputOverride;
    bool m_provisionedThroughputOverrideHasBeenSet = false;

+   OnDemandThroughput m_onDemandThroughputOverride;
+   bool m_onDemandThroughputOverrideHasBeenSet = false;
+
    SSESpecification m_sSESpecificationOverride;
    bool m_sSESpecificationOverrideHasBeenSet = false;
  };
diff --git a/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/SourceTableDetails.h b/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/SourceTableDetails.h
index bac7b36c037..5c3a17eef56 100644
--- a/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/SourceTableDetails.h
+++ b/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/SourceTableDetails.h
@@ -9,6 +9,7 @@
 #include
   /**
    * <p>Number of items in the table. Note that this is an approximate value.</p>
    */
@@ -400,6 +420,9 @@ namespace Model
    ProvisionedThroughput m_provisionedThroughput;
    bool m_provisionedThroughputHasBeenSet = false;

+   OnDemandThroughput m_onDemandThroughput;
+   bool m_onDemandThroughputHasBeenSet = false;
+
    long long m_itemCount;
    bool m_itemCountHasBeenSet = false;
diff --git a/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/TableCreationParameters.h b/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/TableCreationParameters.h
index d852d3b5e21..799041103d2 100644
--- a/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/TableCreationParameters.h
+++ b/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/TableCreationParameters.h
@@ -9,6 +9,7 @@
 #include
 /**
+ * <p>The maximum number of read and write units for the specified on-demand table.
+ * If you use this parameter, you must specify <code>MaxReadRequestUnits</code>,
+ * <code>MaxWriteRequestUnits</code>, or both.</p>
 */
 /**
+ * <p>Updates the maximum number of read and write units for the specified global
+ * secondary index. If you use this parameter, you must specify
+ * <code>MaxReadRequestUnits</code>, <code>MaxWriteRequestUnits</code>, or
+ * both.</p>
 */
+   /**
+    * <p>Overrides the maximum on-demand throughput for the replica table.</p>
+    */
+   inline const OnDemandThroughputOverride& GetOnDemandThroughputOverride() const{ return m_onDemandThroughputOverride; }
+   inline bool OnDemandThroughputOverrideHasBeenSet() const { return m_onDemandThroughputOverrideHasBeenSet; }
+   inline void SetOnDemandThroughputOverride(const OnDemandThroughputOverride& value) { m_onDemandThroughputOverrideHasBeenSet = true; m_onDemandThroughputOverride = value; }
+   inline void SetOnDemandThroughputOverride(OnDemandThroughputOverride&& value) { m_onDemandThroughputOverrideHasBeenSet = true; m_onDemandThroughputOverride = std::move(value); }
+   inline UpdateReplicationGroupMemberAction& WithOnDemandThroughputOverride(const OnDemandThroughputOverride& value) { SetOnDemandThroughputOverride(value); return *this;}
+   inline UpdateReplicationGroupMemberAction& WithOnDemandThroughputOverride(OnDemandThroughputOverride&& value) { SetOnDemandThroughputOverride(std::move(value)); return *this;}

   /**
    * <p>Replica-specific global secondary index settings.</p>
    */
@@ -272,6 +304,9 @@ namespace Model
    ProvisionedThroughputOverride m_provisionedThroughputOverride;
    bool m_provisionedThroughputOverrideHasBeenSet = false;

+   OnDemandThroughputOverride m_onDemandThroughputOverride;
+   bool m_onDemandThroughputOverrideHasBeenSet = false;
+
    Aws::Vector
diff --git a/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/UpdateTableRequest.h b/generated/src/aws-cpp-sdk-dynamodb/include/aws/dynamodb/model/UpdateTableRequest.h
 #include
 /**
+ * <p>Updates the maximum number of read and write units for the specified table in
+ * on-demand capacity mode. If you use this parameter, you must specify
+ * <code>MaxReadRequestUnits</code>, <code>MaxWriteRequestUnits</code>, or
+ * both.</p>
 */
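The -1 convention documented earlier for on-demand maximums applies here: a hedged sketch of clearing one cap while setting the other through UpdateTable, assuming the SetOnDemandThroughput accessor this patch adds to UpdateTableRequest.

```cpp
#include <aws/dynamodb/DynamoDBClient.h>
#include <aws/dynamodb/model/UpdateTableRequest.h>
#include <aws/dynamodb/model/OnDemandThroughput.h>

void AdjustOnDemandCaps(Aws::DynamoDB::DynamoDBClient& client) {
    Aws::DynamoDB::Model::OnDemandThroughput throughput;
    // -1 removes a previously configured maximum; a value >= 1 sets a new one.
    throughput.SetMaxReadRequestUnits(-1);
    throughput.SetMaxWriteRequestUnits(500);

    Aws::DynamoDB::Model::UpdateTableRequest request;
    request.SetTableName("MyTable"); // placeholder name
    request.SetOnDemandThroughput(throughput);
    auto outcome = client.UpdateTable(request);
}
```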
Gets the public endorsement key associated with the Nitro Trusted Platform + * Module (NitroTPM) for the specified instance.
Returns a list of instance types with the specified instance attributes. You
* can use the response to preview the instance types without launching instances.
diff --git a/generated/src/aws-cpp-sdk-ec2/include/aws/ec2/EC2ServiceClientModel.h b/generated/src/aws-cpp-sdk-ec2/include/aws/ec2/EC2ServiceClientModel.h
index 5eb423c785e..ae968333a16 100644
--- a/generated/src/aws-cpp-sdk-ec2/include/aws/ec2/EC2ServiceClientModel.h
+++ b/generated/src/aws-cpp-sdk-ec2/include/aws/ec2/EC2ServiceClientModel.h
@@ -426,6 +426,7 @@
 #include
 /**
+ * <p>The ID of the instance for which to get the public endorsement key.</p>
 */
 /**
+ * <p>The required public endorsement key type.</p>
 */
 /**
+ * <p>The required public endorsement key format. Specify <code>der</code> for a
+ * DER-encoded public key that is compatible with OpenSSL. Specify
+ * <code>tpmt</code> for a TPM 2.0 format that is compatible with tpm2-tools. The
+ * returned key is base64 encoded.</p>
 */
 /**
+ * <p>Specify this parameter to verify whether the request will succeed, without
+ * actually making the request. If the request will succeed, the response is
+ * <code>DryRunOperation</code>. Otherwise, the response is
+ * <code>UnauthorizedOperation</code>.</p>
 */
 /**
+ * <p>The ID of the instance.</p>
 */
 /**
+ * <p>The public endorsement key type.</p>
 */
 /**
+ * <p>The public endorsement key format.</p>
 */
 /**
+ * <p>The public endorsement key material.</p>
 */
+ * Amazon Personalize dataset group in batches. You specify the users to delete in
+ * a CSV file of userIds in an Amazon S3 bucket. After a job completes, Amazon
+ * Personalize no longer trains on the users’ data and no longer considers the
+ * users when generating user segments. For more information about creating a data
+ * deletion job, see Deleting
+ * users. Your input file must be a CSV file with a single
+ * USER_ID column that lists the users IDs. For more information about preparing
+ * the CSV file, see Preparing
+ * your data deletion file and uploading it to Amazon S3. To
+ * give Amazon Personalize permission to access your input CSV file of userIds, you
+ * must specify an IAM service role that has permission to read from the data
+ * source. This role needs After
+ * you create a job, it can take up to a day to delete all references to the users
+ * from datasets and models. Until the job completes, Amazon Personalize continues
+ * to use the data when training. And if you use a User Segmentation recipe, the
+ * users might appear in user segments. Status A data
+ * deletion job can have one of the following statuses: PENDING
+ * > IN_PROGRESS > COMPLETED -or- FAILED To get the status
+ * of the data deletion job, call DescribeDataDeletionJob
+ * API operation and specify the Amazon Resource Name (ARN) of the job. If the
+ * status is FAILED, the response includes a Related APIs der
for a
+ * DER-encoded public key that is compatible with OpenSSL. Specify
+ * tpmt
for a TPM 2.0 format that is compatible with tpm2-tools. The
+ * returned key is base64 encoded.der
for a
+ * DER-encoded public key that is compatible with OpenSSL. Specify
+ * tpmt
for a TPM 2.0 format that is compatible with tpm2-tools. The
+ * returned key is base64 encoded.der
for a
+ * DER-encoded public key that is compatible with OpenSSL. Specify
+ * tpmt
for a TPM 2.0 format that is compatible with tpm2-tools. The
+ * returned key is base64 encoded.der
for a
+ * DER-encoded public key that is compatible with OpenSSL. Specify
+ * tpmt
for a TPM 2.0 format that is compatible with tpm2-tools. The
+ * returned key is base64 encoded.der
for a
+ * DER-encoded public key that is compatible with OpenSSL. Specify
+ * tpmt
for a TPM 2.0 format that is compatible with tpm2-tools. The
+ * returned key is base64 encoded.der
for a
+ * DER-encoded public key that is compatible with OpenSSL. Specify
+ * tpmt
for a TPM 2.0 format that is compatible with tpm2-tools. The
+ * returned key is base64 encoded.DryRunOperation
. Otherwise, the response is
+ * UnauthorizedOperation
.DryRunOperation
. Otherwise, the response is
+ * UnauthorizedOperation
.DryRunOperation
. Otherwise, the response is
+ * UnauthorizedOperation
.DryRunOperation
. Otherwise, the response is
+ * UnauthorizedOperation
.
GetObject
and ListBucket
+ * permissions for the bucket and its content. These permissions are the same as
+ * importing data. For information on granting access to your Amazon S3 bucket, see
+ * Giving
+ * Amazon Personalize Access to Amazon S3 Resources.
failureReason
key, which
+ * describes why the job failed.See Also:
AWS
+ * API Reference
Creates an empty dataset and adds it to the specified dataset group. Use CreateDatasetImportJob @@ -1131,6 +1191,33 @@ namespace Personalize return SubmitAsync(&PersonalizeClient::DescribeCampaign, request, handler, context); } + /** + *
Describes the data deletion job created by CreateDataDeletionJob, + * including the job status.
Describes the given dataset. For more information on datasets, see CreateDataset.
Returns a list of data deletion jobs for a dataset group ordered by creation + * time, with the most recent first. When a dataset group is not specified, all the + * data deletion jobs associated with the account are listed. The response provides + * the properties for each job, including the Amazon Resource Name (ARN). For more + * information on data deletion jobs, see Deleting + * users.
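Taken together, CreateDataDeletionJob needs a job name, the dataset group ARN, an S3 data source, and the IAM role that can read it. A minimal sketch under those assumptions (the ARNs and the bucket path are placeholders):

#include <aws/core/Aws.h>
#include <aws/personalize/PersonalizeClient.h>
#include <aws/personalize/model/CreateDataDeletionJobRequest.h>
#include <aws/personalize/model/DataSource.h>
#include <iostream>

int main()
{
    Aws::SDKOptions options;
    Aws::InitAPI(options);
    {
        Aws::Personalize::PersonalizeClient client;

        // CSV of user IDs to delete; a trailing '/' would instead point the
        // job at every file in the folder (and its subfolders).
        Aws::Personalize::Model::DataSource source;
        source.SetDataLocation("s3://amzn-s3-demo-bucket/folder-name/deletions.csv");

        Aws::Personalize::Model::CreateDataDeletionJobRequest request;
        request.SetJobName("delete-opted-out-users");
        request.SetDatasetGroupArn(
            "arn:aws:personalize:us-west-2:123456789012:dataset-group/example");
        request.SetDataSource(source);
        request.SetRoleArn("arn:aws:iam::123456789012:role/PersonalizeS3ReadRole");

        auto outcome = client.CreateDataDeletionJob(request);
        if (outcome.IsSuccess()) {
            std::cout << outcome.GetResult().GetDataDeletionJobArn() << std::endl;
        } else {
            std::cerr << outcome.GetError().GetMessage() << std::endl;
        }
    }
    Aws::ShutdownAPI(options);
    return 0;
}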
Returns a list of dataset export jobs that use the given dataset. When a
* dataset is not specified, all the dataset export jobs associated with the
diff --git a/generated/src/aws-cpp-sdk-personalize/include/aws/personalize/PersonalizeServiceClientModel.h b/generated/src/aws-cpp-sdk-personalize/include/aws/personalize/PersonalizeServiceClientModel.h
index eaf49271e2c..671d96929e1 100644
--- a/generated/src/aws-cpp-sdk-personalize/include/aws/personalize/PersonalizeServiceClientModel.h
+++ b/generated/src/aws-cpp-sdk-personalize/include/aws/personalize/PersonalizeServiceClientModel.h
@@ -21,6 +21,7 @@
#include The name for the data deletion job. The name for the data deletion job. The name for the data deletion job. The name for the data deletion job. The name for the data deletion job. The name for the data deletion job. The name for the data deletion job. The name for the data deletion job. The Amazon Resource Name (ARN) of the dataset group that has the datasets you
+ * want to delete records from. The Amazon Resource Name (ARN) of the dataset group that has the datasets you
+ * want to delete records from. The Amazon Resource Name (ARN) of the dataset group that has the datasets you
+ * want to delete records from. The Amazon Resource Name (ARN) of the dataset group that has the datasets you
+ * want to delete records from. The Amazon Resource Name (ARN) of the dataset group that has the datasets you
+ * want to delete records from. The Amazon Resource Name (ARN) of the dataset group that has the datasets you
+ * want to delete records from. The Amazon Resource Name (ARN) of the dataset group that has the datasets you
+ * want to delete records from. The Amazon Resource Name (ARN) of the dataset group that has the datasets you
+ * want to delete records from. The Amazon S3 bucket that contains the list of userIds of the users to
+ * delete. The Amazon S3 bucket that contains the list of userIds of the users to
+ * delete. The Amazon S3 bucket that contains the list of userIds of the users to
+ * delete. The Amazon S3 bucket that contains the list of userIds of the users to
+ * delete. The Amazon S3 bucket that contains the list of userIds of the users to
+ * delete. The Amazon S3 bucket that contains the list of userIds of the users to
+ * delete. The Amazon Resource Name (ARN) of the IAM role that has permissions to read
+ * from the Amazon S3 data source. The Amazon Resource Name (ARN) of the IAM role that has permissions to read
+ * from the Amazon S3 data source. The Amazon Resource Name (ARN) of the IAM role that has permissions to read
+ * from the Amazon S3 data source. The Amazon Resource Name (ARN) of the IAM role that has permissions to read
+ * from the Amazon S3 data source. The Amazon Resource Name (ARN) of the IAM role that has permissions to read
+ * from the Amazon S3 data source. The Amazon Resource Name (ARN) of the IAM role that has permissions to read
+ * from the Amazon S3 data source. The Amazon Resource Name (ARN) of the IAM role that has permissions to read
+ * from the Amazon S3 data source. The Amazon Resource Name (ARN) of the IAM role that has permissions to read
+ * from the Amazon S3 data source. A list of tags
+ * to apply to the data deletion job. A list of tags
+ * to apply to the data deletion job. A list of tags
+ * to apply to the data deletion job. A list of tags
+ * to apply to the data deletion job. A list of tags
+ * to apply to the data deletion job. A list of tags
+ * to apply to the data deletion job. A list of tags
+ * to apply to the data deletion job. A list of tags
+ * to apply to the data deletion job. The Amazon Resource Name (ARN) of the data deletion job. The Amazon Resource Name (ARN) of the data deletion job. The Amazon Resource Name (ARN) of the data deletion job. The Amazon Resource Name (ARN) of the data deletion job. The Amazon Resource Name (ARN) of the data deletion job. The Amazon Resource Name (ARN) of the data deletion job. The Amazon Resource Name (ARN) of the data deletion job. Describes a job that deletes all references to specific users from an Amazon
+ * Personalize dataset group in batches. For information about creating a data
+ * deletion job, see Deleting
+ * users.See Also:
AWS
+ * API Reference
The name of the data deletion job.
+ */ + inline const Aws::String& GetJobName() const{ return m_jobName; } + + /** + *The name of the data deletion job.
+ */ + inline bool JobNameHasBeenSet() const { return m_jobNameHasBeenSet; } + + /** + *The name of the data deletion job.
+ */ + inline void SetJobName(const Aws::String& value) { m_jobNameHasBeenSet = true; m_jobName = value; } + + /** + *The name of the data deletion job.
+ */ + inline void SetJobName(Aws::String&& value) { m_jobNameHasBeenSet = true; m_jobName = std::move(value); } + + /** + *The name of the data deletion job.
+ */ + inline void SetJobName(const char* value) { m_jobNameHasBeenSet = true; m_jobName.assign(value); } + + /** + *The name of the data deletion job.
+ */ + inline DataDeletionJob& WithJobName(const Aws::String& value) { SetJobName(value); return *this;} + + /** + *The name of the data deletion job.
+ */ + inline DataDeletionJob& WithJobName(Aws::String&& value) { SetJobName(std::move(value)); return *this;} + + /** + *The name of the data deletion job.
+ */ + inline DataDeletionJob& WithJobName(const char* value) { SetJobName(value); return *this;} + + + /** + *The Amazon Resource Name (ARN) of the data deletion job.
+ */ + inline const Aws::String& GetDataDeletionJobArn() const{ return m_dataDeletionJobArn; } + + /** + *The Amazon Resource Name (ARN) of the data deletion job.
+ */ + inline bool DataDeletionJobArnHasBeenSet() const { return m_dataDeletionJobArnHasBeenSet; } + + /** + *The Amazon Resource Name (ARN) of the data deletion job.
+ */ + inline void SetDataDeletionJobArn(const Aws::String& value) { m_dataDeletionJobArnHasBeenSet = true; m_dataDeletionJobArn = value; } + + /** + *The Amazon Resource Name (ARN) of the data deletion job.
+ */ + inline void SetDataDeletionJobArn(Aws::String&& value) { m_dataDeletionJobArnHasBeenSet = true; m_dataDeletionJobArn = std::move(value); } + + /** + *The Amazon Resource Name (ARN) of the data deletion job.
+ */ + inline void SetDataDeletionJobArn(const char* value) { m_dataDeletionJobArnHasBeenSet = true; m_dataDeletionJobArn.assign(value); } + + /** + *The Amazon Resource Name (ARN) of the data deletion job.
+ */ + inline DataDeletionJob& WithDataDeletionJobArn(const Aws::String& value) { SetDataDeletionJobArn(value); return *this;} + + /** + *The Amazon Resource Name (ARN) of the data deletion job.
+ */ + inline DataDeletionJob& WithDataDeletionJobArn(Aws::String&& value) { SetDataDeletionJobArn(std::move(value)); return *this;} + + /** + *The Amazon Resource Name (ARN) of the data deletion job.
+ */ + inline DataDeletionJob& WithDataDeletionJobArn(const char* value) { SetDataDeletionJobArn(value); return *this;} + + + /** + *The Amazon Resource Name (ARN) of the dataset group the job deletes records + * from.
+ */ + inline const Aws::String& GetDatasetGroupArn() const{ return m_datasetGroupArn; } + + /** + *The Amazon Resource Name (ARN) of the dataset group the job deletes records + * from.
+ */ + inline bool DatasetGroupArnHasBeenSet() const { return m_datasetGroupArnHasBeenSet; } + + /** + *The Amazon Resource Name (ARN) of the dataset group the job deletes records + * from.
+ */ + inline void SetDatasetGroupArn(const Aws::String& value) { m_datasetGroupArnHasBeenSet = true; m_datasetGroupArn = value; } + + /** + *The Amazon Resource Name (ARN) of the dataset group the job deletes records + * from.
+ */ + inline void SetDatasetGroupArn(Aws::String&& value) { m_datasetGroupArnHasBeenSet = true; m_datasetGroupArn = std::move(value); } + + /** + *The Amazon Resource Name (ARN) of the dataset group the job deletes records + * from.
+ */ + inline void SetDatasetGroupArn(const char* value) { m_datasetGroupArnHasBeenSet = true; m_datasetGroupArn.assign(value); } + + /** + *The Amazon Resource Name (ARN) of the dataset group the job deletes records + * from.
+ */ + inline DataDeletionJob& WithDatasetGroupArn(const Aws::String& value) { SetDatasetGroupArn(value); return *this;} + + /** + *The Amazon Resource Name (ARN) of the dataset group the job deletes records + * from.
+ */ + inline DataDeletionJob& WithDatasetGroupArn(Aws::String&& value) { SetDatasetGroupArn(std::move(value)); return *this;} + + /** + *The Amazon Resource Name (ARN) of the dataset group the job deletes records + * from.
+ */ + inline DataDeletionJob& WithDatasetGroupArn(const char* value) { SetDatasetGroupArn(value); return *this;} + + + + inline const DataSource& GetDataSource() const{ return m_dataSource; } + + + inline bool DataSourceHasBeenSet() const { return m_dataSourceHasBeenSet; } + + + inline void SetDataSource(const DataSource& value) { m_dataSourceHasBeenSet = true; m_dataSource = value; } + + + inline void SetDataSource(DataSource&& value) { m_dataSourceHasBeenSet = true; m_dataSource = std::move(value); } + + + inline DataDeletionJob& WithDataSource(const DataSource& value) { SetDataSource(value); return *this;} + + + inline DataDeletionJob& WithDataSource(DataSource&& value) { SetDataSource(std::move(value)); return *this;} + + + /** + *The Amazon Resource Name (ARN) of the IAM role that has permissions to read + * from the Amazon S3 data source.
+ */ + inline const Aws::String& GetRoleArn() const{ return m_roleArn; } + + /** + *The Amazon Resource Name (ARN) of the IAM role that has permissions to read + * from the Amazon S3 data source.
+ */ + inline bool RoleArnHasBeenSet() const { return m_roleArnHasBeenSet; } + + /** + *The Amazon Resource Name (ARN) of the IAM role that has permissions to read + * from the Amazon S3 data source.
+ */ + inline void SetRoleArn(const Aws::String& value) { m_roleArnHasBeenSet = true; m_roleArn = value; } + + /** + *The Amazon Resource Name (ARN) of the IAM role that has permissions to read + * from the Amazon S3 data source.
+ */ + inline void SetRoleArn(Aws::String&& value) { m_roleArnHasBeenSet = true; m_roleArn = std::move(value); } + + /** + *The Amazon Resource Name (ARN) of the IAM role that has permissions to read + * from the Amazon S3 data source.
+ */ + inline void SetRoleArn(const char* value) { m_roleArnHasBeenSet = true; m_roleArn.assign(value); } + + /** + *The Amazon Resource Name (ARN) of the IAM role that has permissions to read + * from the Amazon S3 data source.
+ */ + inline DataDeletionJob& WithRoleArn(const Aws::String& value) { SetRoleArn(value); return *this;} + + /** + *The Amazon Resource Name (ARN) of the IAM role that has permissions to read + * from the Amazon S3 data source.
+ */ + inline DataDeletionJob& WithRoleArn(Aws::String&& value) { SetRoleArn(std::move(value)); return *this;} + + /** + *The Amazon Resource Name (ARN) of the IAM role that has permissions to read + * from the Amazon S3 data source.
+ */ + inline DataDeletionJob& WithRoleArn(const char* value) { SetRoleArn(value); return *this;} + + + /** + *The status of the data deletion job.
A data deletion job can have one + * of the following statuses:
PENDING > IN_PROGRESS > + * COMPLETED -or- FAILED
The status of the data deletion job.
A data deletion job can have one + * of the following statuses:
PENDING > IN_PROGRESS > + * COMPLETED -or- FAILED
The status of the data deletion job.
A data deletion job can have one + * of the following statuses:
PENDING > IN_PROGRESS > + * COMPLETED -or- FAILED
The status of the data deletion job.
A data deletion job can have one + * of the following statuses:
PENDING > IN_PROGRESS > + * COMPLETED -or- FAILED
The status of the data deletion job.
A data deletion job can have one + * of the following statuses:
PENDING > IN_PROGRESS > + * COMPLETED -or- FAILED
The status of the data deletion job.
A data deletion job can have one + * of the following statuses:
PENDING > IN_PROGRESS > + * COMPLETED -or- FAILED
The status of the data deletion job.
A data deletion job can have one + * of the following statuses:
PENDING > IN_PROGRESS > + * COMPLETED -or- FAILED
The status of the data deletion job.
A data deletion job can have one + * of the following statuses:
PENDING > IN_PROGRESS > + * COMPLETED -or- FAILED
The number of records deleted by a COMPLETED job.
+ */ + inline int GetNumDeleted() const{ return m_numDeleted; } + + /** + *The number of records deleted by a COMPLETED job.
+ */ + inline bool NumDeletedHasBeenSet() const { return m_numDeletedHasBeenSet; } + + /** + *The number of records deleted by a COMPLETED job.
+ */ + inline void SetNumDeleted(int value) { m_numDeletedHasBeenSet = true; m_numDeleted = value; } + + /** + *The number of records deleted by a COMPLETED job.
+ */ + inline DataDeletionJob& WithNumDeleted(int value) { SetNumDeleted(value); return *this;} + + + /** + *The creation date and time (in Unix time) of the data deletion job.
+ */ + inline const Aws::Utils::DateTime& GetCreationDateTime() const{ return m_creationDateTime; } + + /** + *The creation date and time (in Unix time) of the data deletion job.
+ */ + inline bool CreationDateTimeHasBeenSet() const { return m_creationDateTimeHasBeenSet; } + + /** + *The creation date and time (in Unix time) of the data deletion job.
+ */ + inline void SetCreationDateTime(const Aws::Utils::DateTime& value) { m_creationDateTimeHasBeenSet = true; m_creationDateTime = value; } + + /** + *The creation date and time (in Unix time) of the data deletion job.
+ */ + inline void SetCreationDateTime(Aws::Utils::DateTime&& value) { m_creationDateTimeHasBeenSet = true; m_creationDateTime = std::move(value); } + + /** + *The creation date and time (in Unix time) of the data deletion job.
+ */ + inline DataDeletionJob& WithCreationDateTime(const Aws::Utils::DateTime& value) { SetCreationDateTime(value); return *this;} + + /** + *The creation date and time (in Unix time) of the data deletion job.
+ */ + inline DataDeletionJob& WithCreationDateTime(Aws::Utils::DateTime&& value) { SetCreationDateTime(std::move(value)); return *this;} + + + /** + *The date and time (in Unix time) the data deletion job was last updated.
+ */ + inline const Aws::Utils::DateTime& GetLastUpdatedDateTime() const{ return m_lastUpdatedDateTime; } + + /** + *The date and time (in Unix time) the data deletion job was last updated.
+ */ + inline bool LastUpdatedDateTimeHasBeenSet() const { return m_lastUpdatedDateTimeHasBeenSet; } + + /** + *The date and time (in Unix time) the data deletion job was last updated.
+ */ + inline void SetLastUpdatedDateTime(const Aws::Utils::DateTime& value) { m_lastUpdatedDateTimeHasBeenSet = true; m_lastUpdatedDateTime = value; } + + /** + *The date and time (in Unix time) the data deletion job was last updated.
+ */ + inline void SetLastUpdatedDateTime(Aws::Utils::DateTime&& value) { m_lastUpdatedDateTimeHasBeenSet = true; m_lastUpdatedDateTime = std::move(value); } + + /** + *The date and time (in Unix time) the data deletion job was last updated.
+ */ + inline DataDeletionJob& WithLastUpdatedDateTime(const Aws::Utils::DateTime& value) { SetLastUpdatedDateTime(value); return *this;} + + /** + *The date and time (in Unix time) the data deletion job was last updated.
+ */ + inline DataDeletionJob& WithLastUpdatedDateTime(Aws::Utils::DateTime&& value) { SetLastUpdatedDateTime(std::move(value)); return *this;} + + + /** + *If a data deletion job fails, provides the reason why.
+ */ + inline const Aws::String& GetFailureReason() const{ return m_failureReason; } + + /** + *If a data deletion job fails, provides the reason why.
+ */ + inline bool FailureReasonHasBeenSet() const { return m_failureReasonHasBeenSet; } + + /** + *If a data deletion job fails, provides the reason why.
+ */ + inline void SetFailureReason(const Aws::String& value) { m_failureReasonHasBeenSet = true; m_failureReason = value; } + + /** + *If a data deletion job fails, provides the reason why.
+ */ + inline void SetFailureReason(Aws::String&& value) { m_failureReasonHasBeenSet = true; m_failureReason = std::move(value); } + + /** + *If a data deletion job fails, provides the reason why.
+ */ + inline void SetFailureReason(const char* value) { m_failureReasonHasBeenSet = true; m_failureReason.assign(value); } + + /** + *If a data deletion job fails, provides the reason why.
+ */ + inline DataDeletionJob& WithFailureReason(const Aws::String& value) { SetFailureReason(value); return *this;} + + /** + *If a data deletion job fails, provides the reason why.
+ */ + inline DataDeletionJob& WithFailureReason(Aws::String&& value) { SetFailureReason(std::move(value)); return *this;} + + /** + *If a data deletion job fails, provides the reason why.
+ */ + inline DataDeletionJob& WithFailureReason(const char* value) { SetFailureReason(value); return *this;} + + private: + + Aws::String m_jobName; + bool m_jobNameHasBeenSet = false; + + Aws::String m_dataDeletionJobArn; + bool m_dataDeletionJobArnHasBeenSet = false; + + Aws::String m_datasetGroupArn; + bool m_datasetGroupArnHasBeenSet = false; + + DataSource m_dataSource; + bool m_dataSourceHasBeenSet = false; + + Aws::String m_roleArn; + bool m_roleArnHasBeenSet = false; + + Aws::String m_status; + bool m_statusHasBeenSet = false; + + int m_numDeleted; + bool m_numDeletedHasBeenSet = false; + + Aws::Utils::DateTime m_creationDateTime; + bool m_creationDateTimeHasBeenSet = false; + + Aws::Utils::DateTime m_lastUpdatedDateTime; + bool m_lastUpdatedDateTimeHasBeenSet = false; + + Aws::String m_failureReason; + bool m_failureReasonHasBeenSet = false; + }; + +} // namespace Model +} // namespace Personalize +} // namespace Aws diff --git a/generated/src/aws-cpp-sdk-personalize/include/aws/personalize/model/DataDeletionJobSummary.h b/generated/src/aws-cpp-sdk-personalize/include/aws/personalize/model/DataDeletionJobSummary.h new file mode 100644 index 00000000000..60fd738e6db --- /dev/null +++ b/generated/src/aws-cpp-sdk-personalize/include/aws/personalize/model/DataDeletionJobSummary.h @@ -0,0 +1,360 @@ +/** + * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. + * SPDX-License-Identifier: Apache-2.0. + */ + +#pragma once +#includeProvides a summary of the properties of a data deletion job. For a complete + * listing, call the DescribeDataDeletionJob + * API operation.
The Amazon Resource Name (ARN) of the data deletion job.
+ */ + inline const Aws::String& GetDataDeletionJobArn() const{ return m_dataDeletionJobArn; } + + /** + *The Amazon Resource Name (ARN) of the data deletion job.
+ */ + inline bool DataDeletionJobArnHasBeenSet() const { return m_dataDeletionJobArnHasBeenSet; } + + /** + *The Amazon Resource Name (ARN) of the data deletion job.
+ */ + inline void SetDataDeletionJobArn(const Aws::String& value) { m_dataDeletionJobArnHasBeenSet = true; m_dataDeletionJobArn = value; } + + /** + *The Amazon Resource Name (ARN) of the data deletion job.
+ */ + inline void SetDataDeletionJobArn(Aws::String&& value) { m_dataDeletionJobArnHasBeenSet = true; m_dataDeletionJobArn = std::move(value); } + + /** + *The Amazon Resource Name (ARN) of the data deletion job.
+ */ + inline void SetDataDeletionJobArn(const char* value) { m_dataDeletionJobArnHasBeenSet = true; m_dataDeletionJobArn.assign(value); } + + /** + *The Amazon Resource Name (ARN) of the data deletion job.
+ */ + inline DataDeletionJobSummary& WithDataDeletionJobArn(const Aws::String& value) { SetDataDeletionJobArn(value); return *this;} + + /** + *The Amazon Resource Name (ARN) of the data deletion job.
+ */ + inline DataDeletionJobSummary& WithDataDeletionJobArn(Aws::String&& value) { SetDataDeletionJobArn(std::move(value)); return *this;} + + /** + *The Amazon Resource Name (ARN) of the data deletion job.
+ */ + inline DataDeletionJobSummary& WithDataDeletionJobArn(const char* value) { SetDataDeletionJobArn(value); return *this;} + + + /** + *The Amazon Resource Name (ARN) of the dataset group the job deleted records + * from.
+ */ + inline const Aws::String& GetDatasetGroupArn() const{ return m_datasetGroupArn; } + + /** + *The Amazon Resource Name (ARN) of the dataset group the job deleted records + * from.
+ */ + inline bool DatasetGroupArnHasBeenSet() const { return m_datasetGroupArnHasBeenSet; } + + /** + *The Amazon Resource Name (ARN) of the dataset group the job deleted records + * from.
+ */ + inline void SetDatasetGroupArn(const Aws::String& value) { m_datasetGroupArnHasBeenSet = true; m_datasetGroupArn = value; } + + /** + *The Amazon Resource Name (ARN) of the dataset group the job deleted records + * from.
+ */ + inline void SetDatasetGroupArn(Aws::String&& value) { m_datasetGroupArnHasBeenSet = true; m_datasetGroupArn = std::move(value); } + + /** + *The Amazon Resource Name (ARN) of the dataset group the job deleted records + * from.
+ */ + inline void SetDatasetGroupArn(const char* value) { m_datasetGroupArnHasBeenSet = true; m_datasetGroupArn.assign(value); } + + /** + *The Amazon Resource Name (ARN) of the dataset group the job deleted records + * from.
+ */ + inline DataDeletionJobSummary& WithDatasetGroupArn(const Aws::String& value) { SetDatasetGroupArn(value); return *this;} + + /** + *The Amazon Resource Name (ARN) of the dataset group the job deleted records + * from.
+ */ + inline DataDeletionJobSummary& WithDatasetGroupArn(Aws::String&& value) { SetDatasetGroupArn(std::move(value)); return *this;} + + /** + *The Amazon Resource Name (ARN) of the dataset group the job deleted records + * from.
+ */ + inline DataDeletionJobSummary& WithDatasetGroupArn(const char* value) { SetDatasetGroupArn(value); return *this;} + + + /** + *The name of the data deletion job.
+ */ + inline const Aws::String& GetJobName() const{ return m_jobName; } + + /** + *The name of the data deletion job.
+ */ + inline bool JobNameHasBeenSet() const { return m_jobNameHasBeenSet; } + + /** + *The name of the data deletion job.
+ */ + inline void SetJobName(const Aws::String& value) { m_jobNameHasBeenSet = true; m_jobName = value; } + + /** + *The name of the data deletion job.
+ */ + inline void SetJobName(Aws::String&& value) { m_jobNameHasBeenSet = true; m_jobName = std::move(value); } + + /** + *The name of the data deletion job.
+ */ + inline void SetJobName(const char* value) { m_jobNameHasBeenSet = true; m_jobName.assign(value); } + + /** + *The name of the data deletion job.
+ */ + inline DataDeletionJobSummary& WithJobName(const Aws::String& value) { SetJobName(value); return *this;} + + /** + *The name of the data deletion job.
+ */ + inline DataDeletionJobSummary& WithJobName(Aws::String&& value) { SetJobName(std::move(value)); return *this;} + + /** + *The name of the data deletion job.
+ */ + inline DataDeletionJobSummary& WithJobName(const char* value) { SetJobName(value); return *this;} + + + /** + *The status of the data deletion job.
A data deletion job can have one + * of the following statuses:
PENDING > IN_PROGRESS > + * COMPLETED -or- FAILED
The status of the data deletion job.
A data deletion job can have one + * of the following statuses:
PENDING > IN_PROGRESS > + * COMPLETED -or- FAILED
The status of the data deletion job.
A data deletion job can have one + * of the following statuses:
PENDING > IN_PROGRESS > + * COMPLETED -or- FAILED
The status of the data deletion job.
A data deletion job can have one + * of the following statuses:
PENDING > IN_PROGRESS > + * COMPLETED -or- FAILED
The status of the data deletion job.
A data deletion job can have one + * of the following statuses:
PENDING > IN_PROGRESS > + * COMPLETED -or- FAILED
The status of the data deletion job.
A data deletion job can have one + * of the following statuses:
PENDING > IN_PROGRESS > + * COMPLETED -or- FAILED
The status of the data deletion job.
A data deletion job can have one + * of the following statuses:
PENDING > IN_PROGRESS > + * COMPLETED -or- FAILED
The status of the data deletion job.
A data deletion job can have one + * of the following statuses:
PENDING > IN_PROGRESS > + * COMPLETED -or- FAILED
The creation date and time (in Unix time) of the data deletion job.
+ */ + inline const Aws::Utils::DateTime& GetCreationDateTime() const{ return m_creationDateTime; } + + /** + *The creation date and time (in Unix time) of the data deletion job.
+ */ + inline bool CreationDateTimeHasBeenSet() const { return m_creationDateTimeHasBeenSet; } + + /** + *The creation date and time (in Unix time) of the data deletion job.
+ */ + inline void SetCreationDateTime(const Aws::Utils::DateTime& value) { m_creationDateTimeHasBeenSet = true; m_creationDateTime = value; } + + /** + *The creation date and time (in Unix time) of the data deletion job.
+ */ + inline void SetCreationDateTime(Aws::Utils::DateTime&& value) { m_creationDateTimeHasBeenSet = true; m_creationDateTime = std::move(value); } + + /** + *The creation date and time (in Unix time) of the data deletion job.
+ */ + inline DataDeletionJobSummary& WithCreationDateTime(const Aws::Utils::DateTime& value) { SetCreationDateTime(value); return *this;} + + /** + *The creation date and time (in Unix time) of the data deletion job.
+ */ + inline DataDeletionJobSummary& WithCreationDateTime(Aws::Utils::DateTime&& value) { SetCreationDateTime(std::move(value)); return *this;} + + + /** + *The date and time (in Unix time) the data deletion job was last updated.
+ */ + inline const Aws::Utils::DateTime& GetLastUpdatedDateTime() const{ return m_lastUpdatedDateTime; } + + /** + *The date and time (in Unix time) the data deletion job was last updated.
+ */ + inline bool LastUpdatedDateTimeHasBeenSet() const { return m_lastUpdatedDateTimeHasBeenSet; } + + /** + *The date and time (in Unix time) the data deletion job was last updated.
+ */ + inline void SetLastUpdatedDateTime(const Aws::Utils::DateTime& value) { m_lastUpdatedDateTimeHasBeenSet = true; m_lastUpdatedDateTime = value; } + + /** + *The date and time (in Unix time) the data deletion job was last updated.
+ */ + inline void SetLastUpdatedDateTime(Aws::Utils::DateTime&& value) { m_lastUpdatedDateTimeHasBeenSet = true; m_lastUpdatedDateTime = std::move(value); } + + /** + *The date and time (in Unix time) the data deletion job was last updated.
+ */ + inline DataDeletionJobSummary& WithLastUpdatedDateTime(const Aws::Utils::DateTime& value) { SetLastUpdatedDateTime(value); return *this;} + + /** + *The date and time (in Unix time) the data deletion job was last updated.
+ */ + inline DataDeletionJobSummary& WithLastUpdatedDateTime(Aws::Utils::DateTime&& value) { SetLastUpdatedDateTime(std::move(value)); return *this;} + + + /** + *If a data deletion job fails, provides the reason why.
+ */ + inline const Aws::String& GetFailureReason() const{ return m_failureReason; } + + /** + *If a data deletion job fails, provides the reason why.
+ */ + inline bool FailureReasonHasBeenSet() const { return m_failureReasonHasBeenSet; } + + /** + *If a data deletion job fails, provides the reason why.
+ */ + inline void SetFailureReason(const Aws::String& value) { m_failureReasonHasBeenSet = true; m_failureReason = value; } + + /** + *If a data deletion job fails, provides the reason why.
+ */ + inline void SetFailureReason(Aws::String&& value) { m_failureReasonHasBeenSet = true; m_failureReason = std::move(value); } + + /** + *If a data deletion job fails, provides the reason why.
+ */ + inline void SetFailureReason(const char* value) { m_failureReasonHasBeenSet = true; m_failureReason.assign(value); } + + /** + *If a data deletion job fails, provides the reason why.
+ */ + inline DataDeletionJobSummary& WithFailureReason(const Aws::String& value) { SetFailureReason(value); return *this;} + + /** + *If a data deletion job fails, provides the reason why.
+ */ + inline DataDeletionJobSummary& WithFailureReason(Aws::String&& value) { SetFailureReason(std::move(value)); return *this;} + + /** + *If a data deletion job fails, provides the reason why.
+ */ + inline DataDeletionJobSummary& WithFailureReason(const char* value) { SetFailureReason(value); return *this;} + + private: + + Aws::String m_dataDeletionJobArn; + bool m_dataDeletionJobArnHasBeenSet = false; + + Aws::String m_datasetGroupArn; + bool m_datasetGroupArnHasBeenSet = false; + + Aws::String m_jobName; + bool m_jobNameHasBeenSet = false; + + Aws::String m_status; + bool m_statusHasBeenSet = false; + + Aws::Utils::DateTime m_creationDateTime; + bool m_creationDateTimeHasBeenSet = false; + + Aws::Utils::DateTime m_lastUpdatedDateTime; + bool m_lastUpdatedDateTimeHasBeenSet = false; + + Aws::String m_failureReason; + bool m_failureReasonHasBeenSet = false; + }; + +} // namespace Model +} // namespace Personalize +} // namespace Aws diff --git a/generated/src/aws-cpp-sdk-personalize/include/aws/personalize/model/DataSource.h b/generated/src/aws-cpp-sdk-personalize/include/aws/personalize/model/DataSource.h index 9bec84e0be0..21dede3d304 100644 --- a/generated/src/aws-cpp-sdk-personalize/include/aws/personalize/model/DataSource.h +++ b/generated/src/aws-cpp-sdk-personalize/include/aws/personalize/model/DataSource.h @@ -24,8 +24,9 @@ namespace Model { /** - *Describes the data source that contains the data to upload to a - * dataset.
The path to the Amazon S3 bucket where the data that you want to upload to - * your dataset is stored. For example:
- * s3://bucket-name/folder-name/
For dataset import jobs, the path to the Amazon S3 bucket where the data that + * you want to upload to your dataset is stored. For data deletion jobs, the path + * to the Amazon S3 bucket that stores the list of records to delete.
For + * example:
s3://bucket-name/folder-name/fileName.csv
If your CSV files are in a folder in your Amazon S3 bucket and you want your
+ * import job or data deletion job to consider multiple files, you can specify the
+ * path to the folder. With a data deletion job, Amazon Personalize uses all files
+ * in the folder and any sub folder. Use the following syntax with a /
+ * after the folder name:
s3://bucket-name/folder-name/
The path to the Amazon S3 bucket where the data that you want to upload to - * your dataset is stored. For example:
- * s3://bucket-name/folder-name/
For dataset import jobs, the path to the Amazon S3 bucket where the data that + * you want to upload to your dataset is stored. For data deletion jobs, the path + * to the Amazon S3 bucket that stores the list of records to delete.
For + * example:
s3://bucket-name/folder-name/fileName.csv
If your CSV files are in a folder in your Amazon S3 bucket and you want your
+ * import job or data deletion job to consider multiple files, you can specify the
+ * path to the folder. With a data deletion job, Amazon Personalize uses all files
+ * in the folder and any sub folder. Use the following syntax with a /
+ * after the folder name:
s3://bucket-name/folder-name/
The path to the Amazon S3 bucket where the data that you want to upload to - * your dataset is stored. For example:
- * s3://bucket-name/folder-name/
For dataset import jobs, the path to the Amazon S3 bucket where the data that + * you want to upload to your dataset is stored. For data deletion jobs, the path + * to the Amazon S3 bucket that stores the list of records to delete.
For + * example:
s3://bucket-name/folder-name/fileName.csv
If your CSV files are in a folder in your Amazon S3 bucket and you want your
+ * import job or data deletion job to consider multiple files, you can specify the
+ * path to the folder. With a data deletion job, Amazon Personalize uses all files
+ * in the folder and any sub folder. Use the following syntax with a /
+ * after the folder name:
s3://bucket-name/folder-name/
The path to the Amazon S3 bucket where the data that you want to upload to - * your dataset is stored. For example:
- * s3://bucket-name/folder-name/
For dataset import jobs, the path to the Amazon S3 bucket where the data that + * you want to upload to your dataset is stored. For data deletion jobs, the path + * to the Amazon S3 bucket that stores the list of records to delete.
For + * example:
s3://bucket-name/folder-name/fileName.csv
If your CSV files are in a folder in your Amazon S3 bucket and you want your
+ * import job or data deletion job to consider multiple files, you can specify the
+ * path to the folder. With a data deletion job, Amazon Personalize uses all files
+ * in the folder and any sub folder. Use the following syntax with a /
+ * after the folder name:
s3://bucket-name/folder-name/
The path to the Amazon S3 bucket where the data that you want to upload to - * your dataset is stored. For example:
- * s3://bucket-name/folder-name/
For dataset import jobs, the path to the Amazon S3 bucket where the data that + * you want to upload to your dataset is stored. For data deletion jobs, the path + * to the Amazon S3 bucket that stores the list of records to delete.
For + * example:
s3://bucket-name/folder-name/fileName.csv
If your CSV files are in a folder in your Amazon S3 bucket and you want your
+ * import job or data deletion job to consider multiple files, you can specify the
+ * path to the folder. With a data deletion job, Amazon Personalize uses all files
+ * in the folder and any sub folder. Use the following syntax with a /
+ * after the folder name:
s3://bucket-name/folder-name/
The path to the Amazon S3 bucket where the data that you want to upload to - * your dataset is stored. For example:
- * s3://bucket-name/folder-name/
For dataset import jobs, the path to the Amazon S3 bucket where the data that + * you want to upload to your dataset is stored. For data deletion jobs, the path + * to the Amazon S3 bucket that stores the list of records to delete.
For + * example:
s3://bucket-name/folder-name/fileName.csv
If your CSV files are in a folder in your Amazon S3 bucket and you want your
+ * import job or data deletion job to consider multiple files, you can specify the
+ * path to the folder. With a data deletion job, Amazon Personalize uses all files
+ * in the folder and any sub folder. Use the following syntax with a /
+ * after the folder name:
s3://bucket-name/folder-name/
The path to the Amazon S3 bucket where the data that you want to upload to - * your dataset is stored. For example:
- * s3://bucket-name/folder-name/
For dataset import jobs, the path to the Amazon S3 bucket where the data that + * you want to upload to your dataset is stored. For data deletion jobs, the path + * to the Amazon S3 bucket that stores the list of records to delete.
For + * example:
s3://bucket-name/folder-name/fileName.csv
If your CSV files are in a folder in your Amazon S3 bucket and you want your
+ * import job or data deletion job to consider multiple files, you can specify the
+ * path to the folder. With a data deletion job, Amazon Personalize uses all files
+ * in the folder and any sub folder. Use the following syntax with a /
+ * after the folder name:
s3://bucket-name/folder-name/
The path to the Amazon S3 bucket where the data that you want to upload to - * your dataset is stored. For example:
- * s3://bucket-name/folder-name/
For dataset import jobs, the path to the Amazon S3 bucket where the data that + * you want to upload to your dataset is stored. For data deletion jobs, the path + * to the Amazon S3 bucket that stores the list of records to delete.
For + * example:
s3://bucket-name/folder-name/fileName.csv
If your CSV files are in a folder in your Amazon S3 bucket and you want your
+ * import job or data deletion job to consider multiple files, you can specify the
+ * path to the folder. With a data deletion job, Amazon Personalize uses all files
+ * in the folder and any sub folder. Use the following syntax with a /
+ * after the folder name:
s3://bucket-name/folder-name/
The Amazon Resource Name (ARN) of the data deletion job.
+ */ + inline const Aws::String& GetDataDeletionJobArn() const{ return m_dataDeletionJobArn; } + + /** + *The Amazon Resource Name (ARN) of the data deletion job.
+ */ + inline bool DataDeletionJobArnHasBeenSet() const { return m_dataDeletionJobArnHasBeenSet; } + + /** + *The Amazon Resource Name (ARN) of the data deletion job.
+ */ + inline void SetDataDeletionJobArn(const Aws::String& value) { m_dataDeletionJobArnHasBeenSet = true; m_dataDeletionJobArn = value; } + + /** + *The Amazon Resource Name (ARN) of the data deletion job.
+ */ + inline void SetDataDeletionJobArn(Aws::String&& value) { m_dataDeletionJobArnHasBeenSet = true; m_dataDeletionJobArn = std::move(value); } + + /** + *The Amazon Resource Name (ARN) of the data deletion job.
+ */ + inline void SetDataDeletionJobArn(const char* value) { m_dataDeletionJobArnHasBeenSet = true; m_dataDeletionJobArn.assign(value); } + + /** + *The Amazon Resource Name (ARN) of the data deletion job.
+ */ + inline DescribeDataDeletionJobRequest& WithDataDeletionJobArn(const Aws::String& value) { SetDataDeletionJobArn(value); return *this;} + + /** + *The Amazon Resource Name (ARN) of the data deletion job.
+ */ + inline DescribeDataDeletionJobRequest& WithDataDeletionJobArn(Aws::String&& value) { SetDataDeletionJobArn(std::move(value)); return *this;} + + /** + *The Amazon Resource Name (ARN) of the data deletion job.
+ */ + inline DescribeDataDeletionJobRequest& WithDataDeletionJobArn(const char* value) { SetDataDeletionJobArn(value); return *this;} + + private: + + Aws::String m_dataDeletionJobArn; + bool m_dataDeletionJobArnHasBeenSet = false; + }; + +} // namespace Model +} // namespace Personalize +} // namespace Aws diff --git a/generated/src/aws-cpp-sdk-personalize/include/aws/personalize/model/DescribeDataDeletionJobResult.h b/generated/src/aws-cpp-sdk-personalize/include/aws/personalize/model/DescribeDataDeletionJobResult.h new file mode 100644 index 00000000000..e5ab1d9f567 --- /dev/null +++ b/generated/src/aws-cpp-sdk-personalize/include/aws/personalize/model/DescribeDataDeletionJobResult.h @@ -0,0 +1,107 @@ +/** + * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. + * SPDX-License-Identifier: Apache-2.0. + */ + +#pragma once +#includeInformation about the data deletion job, including the status.
The + * status is one of the following values:
PENDING
IN_PROGRESS
COMPLETED
FAILED
Information about the data deletion job, including the status.
The + * status is one of the following values:
PENDING
IN_PROGRESS
COMPLETED
FAILED
Information about the data deletion job, including the status.
The + * status is one of the following values:
PENDING
IN_PROGRESS
COMPLETED
FAILED
Information about the data deletion job, including the status.
The + * status is one of the following values:
PENDING
IN_PROGRESS
COMPLETED
FAILED
Information about the data deletion job, including the status.
The + * status is one of the following values:
PENDING
IN_PROGRESS
COMPLETED
FAILED
The Amazon Resource Name (ARN) of the dataset group to list data deletion + * jobs for.
+ */ + inline const Aws::String& GetDatasetGroupArn() const{ return m_datasetGroupArn; } + + /** + *The Amazon Resource Name (ARN) of the dataset group to list data deletion + * jobs for.
+ */ + inline bool DatasetGroupArnHasBeenSet() const { return m_datasetGroupArnHasBeenSet; } + + /** + *The Amazon Resource Name (ARN) of the dataset group to list data deletion + * jobs for.
+ */ + inline void SetDatasetGroupArn(const Aws::String& value) { m_datasetGroupArnHasBeenSet = true; m_datasetGroupArn = value; } + + /** + *The Amazon Resource Name (ARN) of the dataset group to list data deletion + * jobs for.
+ */ + inline void SetDatasetGroupArn(Aws::String&& value) { m_datasetGroupArnHasBeenSet = true; m_datasetGroupArn = std::move(value); } + + /** + *The Amazon Resource Name (ARN) of the dataset group to list data deletion + * jobs for.
+ */ + inline void SetDatasetGroupArn(const char* value) { m_datasetGroupArnHasBeenSet = true; m_datasetGroupArn.assign(value); } + + /** + *The Amazon Resource Name (ARN) of the dataset group to list data deletion + * jobs for.
+ */ + inline ListDataDeletionJobsRequest& WithDatasetGroupArn(const Aws::String& value) { SetDatasetGroupArn(value); return *this;} + + /** + *The Amazon Resource Name (ARN) of the dataset group to list data deletion + * jobs for.
+ */ + inline ListDataDeletionJobsRequest& WithDatasetGroupArn(Aws::String&& value) { SetDatasetGroupArn(std::move(value)); return *this;} + + /** + *The Amazon Resource Name (ARN) of the dataset group to list data deletion + * jobs for.
+ */ + inline ListDataDeletionJobsRequest& WithDatasetGroupArn(const char* value) { SetDatasetGroupArn(value); return *this;} + + + /** + *A token returned from the previous call to ListDataDeletionJobs
+ * for getting the next set of jobs (if they exist).
A token returned from the previous call to ListDataDeletionJobs
+ * for getting the next set of jobs (if they exist).
A token returned from the previous call to ListDataDeletionJobs
+ * for getting the next set of jobs (if they exist).
A token returned from the previous call to ListDataDeletionJobs
+ * for getting the next set of jobs (if they exist).
A token returned from the previous call to ListDataDeletionJobs
+ * for getting the next set of jobs (if they exist).
A token returned from the previous call to ListDataDeletionJobs
+ * for getting the next set of jobs (if they exist).
A token returned from the previous call to ListDataDeletionJobs
+ * for getting the next set of jobs (if they exist).
A token returned from the previous call to ListDataDeletionJobs
+ * for getting the next set of jobs (if they exist).
The maximum number of data deletion jobs to return.
+ */ + inline int GetMaxResults() const{ return m_maxResults; } + + /** + *The maximum number of data deletion jobs to return.
+ */ + inline bool MaxResultsHasBeenSet() const { return m_maxResultsHasBeenSet; } + + /** + *The maximum number of data deletion jobs to return.
+ */ + inline void SetMaxResults(int value) { m_maxResultsHasBeenSet = true; m_maxResults = value; } + + /** + *The maximum number of data deletion jobs to return.
+ */ + inline ListDataDeletionJobsRequest& WithMaxResults(int value) { SetMaxResults(value); return *this;} + + private: + + Aws::String m_datasetGroupArn; + bool m_datasetGroupArnHasBeenSet = false; + + Aws::String m_nextToken; + bool m_nextTokenHasBeenSet = false; + + int m_maxResults; + bool m_maxResultsHasBeenSet = false; + }; + +} // namespace Model +} // namespace Personalize +} // namespace Aws diff --git a/generated/src/aws-cpp-sdk-personalize/include/aws/personalize/model/ListDataDeletionJobsResult.h b/generated/src/aws-cpp-sdk-personalize/include/aws/personalize/model/ListDataDeletionJobsResult.h new file mode 100644 index 00000000000..9d89847795e --- /dev/null +++ b/generated/src/aws-cpp-sdk-personalize/include/aws/personalize/model/ListDataDeletionJobsResult.h @@ -0,0 +1,141 @@ +/** + * Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. + * SPDX-License-Identifier: Apache-2.0. + */ + +#pragma once +#includeThe list of data deletion jobs.
+ */ + inline const Aws::VectorThe list of data deletion jobs.
+ */ + inline void SetDataDeletionJobs(const Aws::VectorThe list of data deletion jobs.
+ */ + inline void SetDataDeletionJobs(Aws::VectorThe list of data deletion jobs.
+ */ + inline ListDataDeletionJobsResult& WithDataDeletionJobs(const Aws::VectorThe list of data deletion jobs.
+ */ + inline ListDataDeletionJobsResult& WithDataDeletionJobs(Aws::VectorThe list of data deletion jobs.
+ */ + inline ListDataDeletionJobsResult& AddDataDeletionJobs(const DataDeletionJobSummary& value) { m_dataDeletionJobs.push_back(value); return *this; } + + /** + *The list of data deletion jobs.
+ */ + inline ListDataDeletionJobsResult& AddDataDeletionJobs(DataDeletionJobSummary&& value) { m_dataDeletionJobs.push_back(std::move(value)); return *this; } + + + /** + *A token for getting the next set of data deletion jobs (if they exist).
+ */ + inline const Aws::String& GetNextToken() const{ return m_nextToken; } + + /** + *A token for getting the next set of data deletion jobs (if they exist).
+ */ + inline void SetNextToken(const Aws::String& value) { m_nextToken = value; } + + /** + *A token for getting the next set of data deletion jobs (if they exist).
+ */ + inline void SetNextToken(Aws::String&& value) { m_nextToken = std::move(value); } + + /** + *A token for getting the next set of data deletion jobs (if they exist).
+ */ + inline void SetNextToken(const char* value) { m_nextToken.assign(value); } + + /** + *A token for getting the next set of data deletion jobs (if they exist).
+ */ + inline ListDataDeletionJobsResult& WithNextToken(const Aws::String& value) { SetNextToken(value); return *this;} + + /** + *A token for getting the next set of data deletion jobs (if they exist).
+ */ + inline ListDataDeletionJobsResult& WithNextToken(Aws::String&& value) { SetNextToken(std::move(value)); return *this;} + + /** + *A token for getting the next set of data deletion jobs (if they exist).
+ */ + inline ListDataDeletionJobsResult& WithNextToken(const char* value) { SetNextToken(value); return *this;} + + + + inline const Aws::String& GetRequestId() const{ return m_requestId; } + + + inline void SetRequestId(const Aws::String& value) { m_requestId = value; } + + + inline void SetRequestId(Aws::String&& value) { m_requestId = std::move(value); } + + + inline void SetRequestId(const char* value) { m_requestId.assign(value); } + + + inline ListDataDeletionJobsResult& WithRequestId(const Aws::String& value) { SetRequestId(value); return *this;} + + + inline ListDataDeletionJobsResult& WithRequestId(Aws::String&& value) { SetRequestId(std::move(value)); return *this;} + + + inline ListDataDeletionJobsResult& WithRequestId(const char* value) { SetRequestId(value); return *this;} + + private: + + Aws::VectorThe key of the parameter. The options are auto_mv
,
* datestyle
, enable_case_sensitive_identifier
,
* enable_user_activity_logging
, query_group
,
- * search_path
, require_ssl
, and query monitoring metrics
- * that let you define performance boundaries. For more information about query
- * monitoring rules and available metrics, see search_path, require_ssl
, use_fips_ssl
,
+ * and query monitoring metrics that let you define performance boundaries. For
+ * more information about query monitoring rules and available metrics, see Query
* monitoring metrics for Amazon Redshift Serverless.
The key of the parameter. The options are auto_mv
,
* datestyle
, enable_case_sensitive_identifier
,
* enable_user_activity_logging
, query_group
,
- * search_path
, require_ssl
, and query monitoring metrics
- * that let you define performance boundaries. For more information about query
- * monitoring rules and available metrics, see search_path, require_ssl
, use_fips_ssl
,
+ * and query monitoring metrics that let you define performance boundaries. For
+ * more information about query monitoring rules and available metrics, see Query
* monitoring metrics for Amazon Redshift Serverless.
The key of the parameter. The options are auto_mv
,
* datestyle
, enable_case_sensitive_identifier
,
* enable_user_activity_logging
, query_group
,
- * search_path
, require_ssl
, and query monitoring metrics
- * that let you define performance boundaries. For more information about query
- * monitoring rules and available metrics, see search_path, require_ssl
, use_fips_ssl
,
+ * and query monitoring metrics that let you define performance boundaries. For
+ * more information about query monitoring rules and available metrics, see Query
* monitoring metrics for Amazon Redshift Serverless.
The key of the parameter. The options are auto_mv
,
* datestyle
, enable_case_sensitive_identifier
,
* enable_user_activity_logging
, query_group
,
- * search_path
, require_ssl
, and query monitoring metrics
- * that let you define performance boundaries. For more information about query
- * monitoring rules and available metrics, see search_path, require_ssl
, use_fips_ssl
,
+ * and query monitoring metrics that let you define performance boundaries. For
+ * more information about query monitoring rules and available metrics, see Query
* monitoring metrics for Amazon Redshift Serverless.
The key of the parameter. The options are auto_mv
,
* datestyle
, enable_case_sensitive_identifier
,
* enable_user_activity_logging
, query_group
,
- * search_path
, require_ssl
, and query monitoring metrics
- * that let you define performance boundaries. For more information about query
- * monitoring rules and available metrics, see search_path, require_ssl
, use_fips_ssl
,
+ * and query monitoring metrics that let you define performance boundaries. For
+   * more information about query monitoring rules and available metrics, see
    * Query monitoring metrics for Amazon Redshift Serverless.
    */

   /**
    * The key of the parameter. The options are auto_mv, datestyle,
    * enable_case_sensitive_identifier, enable_user_activity_logging, query_group,
-   * search_path, require_ssl, and query monitoring metrics that let you define
-   * performance boundaries. For more information about query monitoring rules
-   * and available metrics, see Query monitoring metrics for Amazon Redshift
-   * Serverless.
+   * search_path, require_ssl, use_fips_ssl, and query monitoring metrics that
+   * let you define performance boundaries. For more information about query
+   * monitoring rules and available metrics, see Query monitoring metrics for
+   * Amazon Redshift Serverless.
    */

   /**
    * The key of the parameter. The options are auto_mv, datestyle,
    * enable_case_sensitive_identifier, enable_user_activity_logging, query_group,
-   * search_path, require_ssl, and query monitoring metrics that let you define
-   * performance boundaries. For more information about query monitoring rules
-   * and available metrics, see Query monitoring metrics for Amazon Redshift
-   * Serverless.
+   * search_path, require_ssl, use_fips_ssl, and query monitoring metrics that
+   * let you define performance boundaries. For more information about query
+   * monitoring rules and available metrics, see Query monitoring metrics for
+   * Amazon Redshift Serverless.
    */

   /**
    * The key of the parameter. The options are auto_mv, datestyle,
    * enable_case_sensitive_identifier, enable_user_activity_logging, query_group,
-   * search_path, require_ssl, and query monitoring metrics that let you define
-   * performance boundaries. For more information about query monitoring rules
-   * and available metrics, see Query monitoring metrics for Amazon Redshift
-   * Serverless.
+   * search_path, require_ssl, use_fips_ssl, and query monitoring metrics that
+   * let you define performance boundaries. For more information about query
+   * monitoring rules and available metrics, see Query monitoring metrics for
+   * Amazon Redshift Serverless.
    */

    * options are auto_mv, datestyle, enable_case_sensitive_identifier,
    * enable_user_activity_logging, query_group,
-   * search_path, require_ssl, and query monitoring metrics that let you define
-   * performance boundaries. For more information about query monitoring rules
-   * and available metrics, see Query monitoring metrics for Amazon Redshift
-   * Serverless.
+   * search_path, require_ssl, use_fips_ssl, and query monitoring metrics that
+   * let you define performance boundaries. For more information about query
+   * monitoring rules and available metrics, see Query monitoring metrics for
+   * Amazon Redshift Serverless.
    */
@@ -80,9 +80,9 @@ namespace Model
    * options are auto_mv, datestyle, enable_case_sensitive_identifier,
    * enable_user_activity_logging, query_group,
-   * search_path, require_ssl, and query monitoring metrics that let you define
-   * performance boundaries. For more information about query monitoring rules
-   * and available metrics, see Query monitoring metrics for Amazon Redshift
-   * Serverless.
+   * search_path, require_ssl, use_fips_ssl, and query monitoring metrics that
+   * let you define performance boundaries. For more information about query
+   * monitoring rules and available metrics, see Query monitoring metrics for
+   * Amazon Redshift Serverless.
    */
@@ -93,9 +93,9 @@ namespace Model
    * options are auto_mv, datestyle, enable_case_sensitive_identifier,
    * enable_user_activity_logging, query_group,
-   * search_path, require_ssl, and query monitoring metrics that let you define
-   * performance boundaries. For more information about query monitoring rules
-   * and available metrics, see Query monitoring metrics for Amazon Redshift
-   * Serverless.
+   * search_path, require_ssl, use_fips_ssl, and query monitoring metrics that
+   * let you define performance boundaries. For more information about query
+   * monitoring rules and available metrics, see Query monitoring metrics for
+   * Amazon Redshift Serverless.
    */
@@ -106,9 +106,9 @@ namespace Model
    * options are auto_mv, datestyle, enable_case_sensitive_identifier,
    * enable_user_activity_logging, query_group,
-   * search_path, require_ssl, and query monitoring metrics that let you define
-   * performance boundaries. For more information about query monitoring rules
-   * and available metrics, see Query monitoring metrics for Amazon Redshift
-   * Serverless.
+   * search_path, require_ssl, use_fips_ssl, and query monitoring metrics that
+   * let you define performance boundaries. For more information about query
+   * monitoring rules and available metrics, see Query monitoring metrics for
+   * Amazon Redshift Serverless.
    */
@@ -119,9 +119,9 @@ namespace Model
    * options are auto_mv, datestyle, enable_case_sensitive_identifier,
    * enable_user_activity_logging, query_group,
-   * search_path, require_ssl, and query monitoring metrics that let you define
-   * performance boundaries. For more information about query monitoring rules
-   * and available metrics, see Query monitoring metrics for Amazon Redshift
-   * Serverless.
+   * search_path, require_ssl, use_fips_ssl, and query monitoring metrics that
+   * let you define performance boundaries. For more information about query
+   * monitoring rules and available metrics, see Query monitoring metrics for
+   * Amazon Redshift Serverless.
    */
@@ -132,9 +132,9 @@ namespace Model
    * options are auto_mv, datestyle, enable_case_sensitive_identifier,
    * enable_user_activity_logging, query_group,
-   * search_path, require_ssl, and query monitoring metrics that let you define
-   * performance boundaries. For more information about query monitoring rules
-   * and available metrics, see Query monitoring metrics for Amazon Redshift
-   * Serverless.
+   * search_path, require_ssl, use_fips_ssl, and query monitoring metrics that
+   * let you define performance boundaries. For more information about query
+   * monitoring rules and available metrics, see Query monitoring metrics for
+   * Amazon Redshift Serverless.
    */
@@ -145,9 +145,9 @@ namespace Model
    * options are auto_mv, datestyle, enable_case_sensitive_identifier,
    * enable_user_activity_logging, query_group,
-   * search_path, require_ssl, and query monitoring metrics that let you define
-   * performance boundaries. For more information about query monitoring rules
-   * and available metrics, see Query monitoring metrics for Amazon Redshift
-   * Serverless.
+   * search_path, require_ssl, use_fips_ssl, and query monitoring metrics that
+   * let you define performance boundaries. For more information about query
+   * monitoring rules and available metrics, see Query monitoring metrics for
+   * Amazon Redshift Serverless.
    */
@@ -158,9 +158,9 @@ namespace Model
    * options are auto_mv, datestyle, enable_case_sensitive_identifier,
    * enable_user_activity_logging, query_group,
-   * search_path, require_ssl, and query monitoring metrics that let you define
-   * performance boundaries. For more information about query monitoring rules
-   * and available metrics, see Query monitoring metrics for Amazon Redshift
-   * Serverless.
+   * search_path, require_ssl, use_fips_ssl, and query monitoring metrics that
+   * let you define performance boundaries. For more information about query
+   * monitoring rules and available metrics, see Query monitoring metrics for
+   * Amazon Redshift Serverless.
    */
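The new use_fips_ssl key is set like any other workgroup configuration parameter. The following is a minimal AWS SDK for C++ sketch, not the repository's own test code; it assumes the usual generated setters (SetParameterKey, SetParameterValue, SetWorkgroupName, SetConfigParameters), and the workgroup name is a placeholder:

#include <aws/core/Aws.h>
#include <aws/redshift-serverless/RedshiftServerlessClient.h>
#include <aws/redshift-serverless/model/UpdateWorkgroupRequest.h>
#include <aws/redshift-serverless/model/ConfigParameter.h>
#include <iostream>

int main()
{
    Aws::SDKOptions options;
    Aws::InitAPI(options);
    {
        Aws::RedshiftServerless::RedshiftServerlessClient client;

        // Enable FIPS-compliant SSL on a hypothetical workgroup.
        Aws::RedshiftServerless::Model::ConfigParameter fipsSsl;
        fipsSsl.SetParameterKey("use_fips_ssl");
        fipsSsl.SetParameterValue("true");

        Aws::RedshiftServerless::Model::UpdateWorkgroupRequest request;
        request.SetWorkgroupName("my-workgroup");   // placeholder name
        request.SetConfigParameters({fipsSsl});

        auto outcome = client.UpdateWorkgroup(request);
        if (!outcome.IsSuccess())
            std::cerr << outcome.GetError().GetMessage() << std::endl;
    }
    Aws::ShutdownAPI(options);
    return 0;
}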
diff --git a/generated/src/aws-cpp-sdk-redshift-serverless/include/aws/redshift-serverless/model/ListScheduledActionsResult.h b/generated/src/aws-cpp-sdk-redshift-serverless/include/aws/redshift-serverless/model/ListScheduledActionsResult.h
index 772307a6acc..06946832de9 100644
--- a/generated/src/aws-cpp-sdk-redshift-serverless/include/aws/redshift-serverless/model/ListScheduledActionsResult.h
+++ b/generated/src/aws-cpp-sdk-redshift-serverless/include/aws/redshift-serverless/model/ListScheduledActionsResult.h
@@ -7,6 +7,7 @@
 #include <aws/redshift-serverless/RedshiftServerless_EXPORTS.h>
 #include <aws/core/utils/memory/stl/AWSString.h>
 #include <aws/core/utils/memory/stl/AWSVector.h>
+#include <aws/redshift-serverless/model/ScheduledActionAssociation.h>
 #include <utility>

     /**
-     * All of the returned scheduled action objects.
+     * All of the returned scheduled action association objects.
      */
-    inline const Aws::Vector<Aws::String>& GetScheduledActions() const{ return m_scheduledActions; }
+    inline const Aws::Vector<ScheduledActionAssociation>& GetScheduledActions() const{ return m_scheduledActions; }

-    inline void SetScheduledActions(const Aws::Vector<Aws::String>& value) { m_scheduledActions = value; }
+    inline void SetScheduledActions(const Aws::Vector<ScheduledActionAssociation>& value) { m_scheduledActions = value; }

-    inline void SetScheduledActions(Aws::Vector<Aws::String>&& value) { m_scheduledActions = std::move(value); }
+    inline void SetScheduledActions(Aws::Vector<ScheduledActionAssociation>&& value) { m_scheduledActions = std::move(value); }

-    inline ListScheduledActionsResult& WithScheduledActions(const Aws::Vector<Aws::String>& value) { SetScheduledActions(value); return *this;}
+    inline ListScheduledActionsResult& WithScheduledActions(const Aws::Vector<ScheduledActionAssociation>& value) { SetScheduledActions(value); return *this;}

-    inline ListScheduledActionsResult& WithScheduledActions(Aws::Vector<Aws::String>&& value) { SetScheduledActions(std::move(value)); return *this;}
+    inline ListScheduledActionsResult& WithScheduledActions(Aws::Vector<ScheduledActionAssociation>&& value) { SetScheduledActions(std::move(value)); return *this;}

-    inline ListScheduledActionsResult& AddScheduledActions(const Aws::String& value) { m_scheduledActions.push_back(value); return *this; }
+    inline ListScheduledActionsResult& AddScheduledActions(const ScheduledActionAssociation& value) { m_scheduledActions.push_back(value); return *this; }

-    inline ListScheduledActionsResult& AddScheduledActions(Aws::String&& value) { m_scheduledActions.push_back(std::move(value)); return *this; }
-
-    inline ListScheduledActionsResult& AddScheduledActions(const char* value) { m_scheduledActions.push_back(value); return *this; }
+    inline ListScheduledActionsResult& AddScheduledActions(ScheduledActionAssociation&& value) { m_scheduledActions.push_back(std::move(value)); return *this; }

@@ -149,7 +145,7 @@ namespace Model
     Aws::String m_nextToken;

-    Aws::Vector<Aws::String> m_scheduledActions;
+    Aws::Vector<ScheduledActionAssociation> m_scheduledActions;

diff --git a/generated/src/aws-cpp-sdk-redshift-serverless/include/aws/redshift-serverless/model/ScheduledActionAssociation.h b/generated/src/aws-cpp-sdk-redshift-serverless/include/aws/redshift-serverless/model/ScheduledActionAssociation.h
new file mode 100644
--- /dev/null
+++ b/generated/src/aws-cpp-sdk-redshift-serverless/include/aws/redshift-serverless/model/ScheduledActionAssociation.h
+#pragma once
+#include <aws/redshift-serverless/RedshiftServerless_EXPORTS.h>
+#include <aws/core/utils/memory/stl/AWSString.h>
+#include <utility>
+
+namespace Aws
+{
+namespace Utils
+{
+namespace Json
+{
+  class JsonValue;
+  class JsonView;
+} // namespace Json
+} // namespace Utils
+namespace RedshiftServerless
+{
+namespace Model
+{
+
+  /**
+   * Contains names of objects associated with a scheduled action.
+   */
+  class ScheduledActionAssociation
+  {
+  public:
+    AWS_REDSHIFTSERVERLESS_API ScheduledActionAssociation();
+    AWS_REDSHIFTSERVERLESS_API ScheduledActionAssociation(Aws::Utils::Json::JsonView jsonValue);
+    AWS_REDSHIFTSERVERLESS_API ScheduledActionAssociation& operator=(Aws::Utils::Json::JsonView jsonValue);
+    AWS_REDSHIFTSERVERLESS_API Aws::Utils::Json::JsonValue Jsonize() const;
+
+    /**
+     * Name of associated Amazon Redshift Serverless namespace.
+     */
+    inline const Aws::String& GetNamespaceName() const{ return m_namespaceName; }
+    inline bool NamespaceNameHasBeenSet() const { return m_namespaceNameHasBeenSet; }
+    inline void SetNamespaceName(const Aws::String& value) { m_namespaceNameHasBeenSet = true; m_namespaceName = value; }
+    inline void SetNamespaceName(Aws::String&& value) { m_namespaceNameHasBeenSet = true; m_namespaceName = std::move(value); }
+    inline void SetNamespaceName(const char* value) { m_namespaceNameHasBeenSet = true; m_namespaceName.assign(value); }
+    inline ScheduledActionAssociation& WithNamespaceName(const Aws::String& value) { SetNamespaceName(value); return *this;}
+    inline ScheduledActionAssociation& WithNamespaceName(Aws::String&& value) { SetNamespaceName(std::move(value)); return *this;}
+    inline ScheduledActionAssociation& WithNamespaceName(const char* value) { SetNamespaceName(value); return *this;}
+
+    /**
+     * Name of associated scheduled action.
+     */
+    inline const Aws::String& GetScheduledActionName() const{ return m_scheduledActionName; }
+    inline bool ScheduledActionNameHasBeenSet() const { return m_scheduledActionNameHasBeenSet; }
+    inline void SetScheduledActionName(const Aws::String& value) { m_scheduledActionNameHasBeenSet = true; m_scheduledActionName = value; }
+    inline void SetScheduledActionName(Aws::String&& value) { m_scheduledActionNameHasBeenSet = true; m_scheduledActionName = std::move(value); }
+    inline void SetScheduledActionName(const char* value) { m_scheduledActionNameHasBeenSet = true; m_scheduledActionName.assign(value); }
+    inline ScheduledActionAssociation& WithScheduledActionName(const Aws::String& value) { SetScheduledActionName(value); return *this;}
+    inline ScheduledActionAssociation& WithScheduledActionName(Aws::String&& value) { SetScheduledActionName(std::move(value)); return *this;}
+    inline ScheduledActionAssociation& WithScheduledActionName(const char* value) { SetScheduledActionName(value); return *this;}
+
+  private:
+
+    Aws::String m_namespaceName;
+    bool m_namespaceNameHasBeenSet = false;
+
+    Aws::String m_scheduledActionName;
+    bool m_scheduledActionNameHasBeenSet = false;
+  };
+
+} // namespace Model
+} // namespace RedshiftServerless
+} // namespace Aws
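With this change, ListScheduledActions returns association objects rather than bare action names. A short consumer-side sketch follows (the accessors come from the headers above; client and request construction follow the usual codegen conventions):

#include <aws/core/Aws.h>
#include <aws/redshift-serverless/RedshiftServerlessClient.h>
#include <aws/redshift-serverless/model/ListScheduledActionsRequest.h>
#include <iostream>

int main()
{
    Aws::SDKOptions options;
    Aws::InitAPI(options);
    {
        Aws::RedshiftServerless::RedshiftServerlessClient client;
        Aws::RedshiftServerless::Model::ListScheduledActionsRequest request;

        auto outcome = client.ListScheduledActions(request);
        if (outcome.IsSuccess())
        {
            // Each element is now a ScheduledActionAssociation, not an Aws::String.
            for (const auto& assoc : outcome.GetResult().GetScheduledActions())
            {
                std::cout << assoc.GetNamespaceName() << " : "
                          << assoc.GetScheduledActionName() << std::endl;
            }
        }
    }
    Aws::ShutdownAPI(options);
    return 0;
}

Callers that treated the returned list as strings will no longer compile, which is why the AddScheduledActions(const char*) overload is removed above.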
diff --git a/generated/src/aws-cpp-sdk-redshift-serverless/include/aws/redshift-serverless/model/UpdateWorkgroupRequest.h b/generated/src/aws-cpp-sdk-redshift-serverless/include/aws/redshift-serverless/model/UpdateWorkgroupRequest.h
index 384df289b80..66ec1e9d355 100644
--- a/generated/src/aws-cpp-sdk-redshift-serverless/include/aws/redshift-serverless/model/UpdateWorkgroupRequest.h
+++ b/generated/src/aws-cpp-sdk-redshift-serverless/include/aws/redshift-serverless/model/UpdateWorkgroupRequest.h
@@ -62,9 +62,9 @@ namespace Model
    * options are auto_mv, datestyle, enable_case_sensitive_identifier,
    * enable_user_activity_logging, query_group,
-   * search_path, require_ssl, and query monitoring metrics that let you define
-   * performance boundaries. For more information about query monitoring rules
-   * and available metrics, see Query monitoring metrics for Amazon Redshift
-   * Serverless.
+   * search_path, require_ssl, use_fips_ssl, and query monitoring metrics that
+   * let you define performance boundaries. For more information about query
+   * monitoring rules and available metrics, see Query monitoring metrics for
+   * Amazon Redshift Serverless.
    */
@@ -75,9 +75,9 @@ namespace Model
    * options are auto_mv, datestyle, enable_case_sensitive_identifier,
    * enable_user_activity_logging, query_group,
-   * search_path, require_ssl, and query monitoring metrics that let you define
-   * performance boundaries. For more information about query monitoring rules
-   * and available metrics, see Query monitoring metrics for Amazon Redshift
-   * Serverless.
+   * search_path, require_ssl, use_fips_ssl, and query monitoring metrics that
+   * let you define performance boundaries. For more information about query
+   * monitoring rules and available metrics, see Query monitoring metrics for
+   * Amazon Redshift Serverless.
    */
@@ -88,9 +88,9 @@ namespace Model
    * options are auto_mv, datestyle, enable_case_sensitive_identifier,
    * enable_user_activity_logging, query_group,
-   * search_path, require_ssl, and query monitoring metrics that let you define
-   * performance boundaries. For more information about query monitoring rules
-   * and available metrics, see Query monitoring metrics for Amazon Redshift
-   * Serverless.
+   * search_path, require_ssl, use_fips_ssl, and query monitoring metrics that
+   * let you define performance boundaries. For more information about query
+   * monitoring rules and available metrics, see Query monitoring metrics for
+   * Amazon Redshift Serverless.
    */
@@ -101,9 +101,9 @@ namespace Model
    * options are auto_mv, datestyle, enable_case_sensitive_identifier,
    * enable_user_activity_logging, query_group,
-   * search_path, require_ssl, and query monitoring metrics that let you define
-   * performance boundaries. For more information about query monitoring rules
-   * and available metrics, see Query monitoring metrics for Amazon Redshift
-   * Serverless.
+   * search_path, require_ssl, use_fips_ssl, and query monitoring metrics that
+   * let you define performance boundaries. For more information about query
+   * monitoring rules and available metrics, see Query monitoring metrics for
+   * Amazon Redshift Serverless.
    */
@@ -114,9 +114,9 @@ namespace Model
    * options are auto_mv, datestyle, enable_case_sensitive_identifier,
    * enable_user_activity_logging, query_group,
-   * search_path, require_ssl, and query monitoring metrics that let you define
-   * performance boundaries. For more information about query monitoring rules
-   * and available metrics, see Query monitoring metrics for Amazon Redshift
-   * Serverless.
+   * search_path, require_ssl, use_fips_ssl, and query monitoring metrics that
+   * let you define performance boundaries. For more information about query
+   * monitoring rules and available metrics, see Query monitoring metrics for
+   * Amazon Redshift Serverless.
    */
@@ -127,9 +127,9 @@ namespace Model
    * options are auto_mv, datestyle, enable_case_sensitive_identifier,
    * enable_user_activity_logging, query_group,
-   * search_path, require_ssl, and query monitoring metrics that let you define
-   * performance boundaries. For more information about query monitoring rules
-   * and available metrics, see Query monitoring metrics for Amazon Redshift
-   * Serverless.
+   * search_path, require_ssl, use_fips_ssl, and query monitoring metrics that
+   * let you define performance boundaries. For more information about query
+   * monitoring rules and available metrics, see Query monitoring metrics for
+   * Amazon Redshift Serverless.
    */
@@ -140,9 +140,9 @@ namespace Model
    * options are auto_mv, datestyle, enable_case_sensitive_identifier,
    * enable_user_activity_logging, query_group,
-   * search_path, require_ssl, and query monitoring metrics that let you define
-   * performance boundaries. For more information about query monitoring rules
-   * and available metrics, see Query monitoring metrics for Amazon Redshift
-   * Serverless.
+   * search_path, require_ssl, use_fips_ssl, and query monitoring metrics that
+   * let you define performance boundaries. For more information about query
+   * monitoring rules and available metrics, see Query monitoring metrics for
+   * Amazon Redshift Serverless.
    */
@@ -153,9 +153,9 @@ namespace Model
    * options are auto_mv, datestyle, enable_case_sensitive_identifier,
    * enable_user_activity_logging, query_group,
-   * search_path, require_ssl, and query monitoring metrics that let you define
-   * performance boundaries. For more information about query monitoring rules
-   * and available metrics, see Query monitoring metrics for Amazon Redshift
-   * Serverless.
+   * search_path, require_ssl, use_fips_ssl, and query monitoring metrics that
+   * let you define performance boundaries. For more information about query
+   * monitoring rules and available metrics, see Query monitoring metrics for
+   * Amazon Redshift Serverless.
    */
diff --git a/generated/src/aws-cpp-sdk-redshift-serverless/include/aws/redshift-serverless/model/Workgroup.h b/generated/src/aws-cpp-sdk-redshift-serverless/include/aws/redshift-serverless/model/Workgroup.h
index 1e3e1e69fa5..d29ede6627b 100644
--- a/generated/src/aws-cpp-sdk-redshift-serverless/include/aws/redshift-serverless/model/Workgroup.h
+++ b/generated/src/aws-cpp-sdk-redshift-serverless/include/aws/redshift-serverless/model/Workgroup.h
@@ -73,9 +73,9 @@ namespace Model
    * options are auto_mv, datestyle, enable_case_sensitive_identifier,
    * enable_user_activity_logging, query_group,
-   * search_path, require_ssl, and query monitoring metrics that let you define
-   * performance boundaries. For more information about query monitoring rules
-   * and available metrics, see Query monitoring metrics for Amazon Redshift
-   * Serverless.
+   * search_path, require_ssl, use_fips_ssl, and query monitoring metrics that
+   * let you define performance boundaries. For more information about query
+   * monitoring rules and available metrics, see Query monitoring metrics for
+   * Amazon Redshift Serverless.
    */
@@ -86,9 +86,9 @@ namespace Model
    * options are auto_mv, datestyle, enable_case_sensitive_identifier,
    * enable_user_activity_logging, query_group,
-   * search_path, require_ssl, and query monitoring metrics that let you define
-   * performance boundaries. For more information about query monitoring rules
-   * and available metrics, see Query monitoring metrics for Amazon Redshift
-   * Serverless.
+   * search_path, require_ssl, use_fips_ssl, and query monitoring metrics that
+   * let you define performance boundaries. For more information about query
+   * monitoring rules and available metrics, see Query monitoring metrics for
+   * Amazon Redshift Serverless.
    */
@@ -99,9 +99,9 @@ namespace Model
    * options are auto_mv, datestyle, enable_case_sensitive_identifier,
    * enable_user_activity_logging, query_group,
-   * search_path, require_ssl, and query monitoring metrics that let you define
-   * performance boundaries. For more information about query monitoring rules
-   * and available metrics, see Query monitoring metrics for Amazon Redshift
-   * Serverless.
+   * search_path, require_ssl, use_fips_ssl, and query monitoring metrics that
+   * let you define performance boundaries. For more information about query
+   * monitoring rules and available metrics, see Query monitoring metrics for
+   * Amazon Redshift Serverless.
    */
@@ -112,9 +112,9 @@ namespace Model
    * options are auto_mv, datestyle, enable_case_sensitive_identifier,
    * enable_user_activity_logging, query_group,
-   * search_path, require_ssl, and query monitoring metrics that let you define
-   * performance boundaries. For more information about query monitoring rules
-   * and available metrics, see Query monitoring metrics for Amazon Redshift
-   * Serverless.
+   * search_path, require_ssl, use_fips_ssl, and query monitoring metrics that
+   * let you define performance boundaries. For more information about query
+   * monitoring rules and available metrics, see Query monitoring metrics for
+   * Amazon Redshift Serverless.
    */
@@ -125,9 +125,9 @@ namespace Model
    * options are auto_mv, datestyle, enable_case_sensitive_identifier,
    * enable_user_activity_logging, query_group,
-   * search_path, require_ssl, and query monitoring metrics that let you define
-   * performance boundaries. For more information about query monitoring rules
-   * and available metrics, see Query monitoring metrics for Amazon Redshift
-   * Serverless.
+   * search_path, require_ssl, use_fips_ssl, and query monitoring metrics that
+   * let you define performance boundaries. For more information about query
+   * monitoring rules and available metrics, see Query monitoring metrics for
+   * Amazon Redshift Serverless.
    */
@@ -138,9 +138,9 @@ namespace Model
    * options are auto_mv, datestyle, enable_case_sensitive_identifier,
    * enable_user_activity_logging, query_group,
-   * search_path, require_ssl, and query monitoring metrics that let you define
-   * performance boundaries. For more information about query monitoring rules
-   * and available metrics, see Query monitoring metrics for Amazon Redshift
-   * Serverless.
+   * search_path, require_ssl, use_fips_ssl, and query monitoring metrics that
+   * let you define performance boundaries. For more information about query
+   * monitoring rules and available metrics, see Query monitoring metrics for
+   * Amazon Redshift Serverless.
    */
@@ -151,9 +151,9 @@ namespace Model
    * options are auto_mv, datestyle, enable_case_sensitive_identifier,
    * enable_user_activity_logging, query_group,
-   * search_path, require_ssl, and query monitoring metrics that let you define
-   * performance boundaries. For more information about query monitoring rules
-   * and available metrics, see Query monitoring metrics for Amazon Redshift
-   * Serverless.
+   * search_path, require_ssl, use_fips_ssl, and query monitoring metrics that
+   * let you define performance boundaries. For more information about query
+   * monitoring rules and available metrics, see Query monitoring metrics for
+   * Amazon Redshift Serverless.
    */
@@ -164,9 +164,9 @@ namespace Model
    * options are auto_mv, datestyle, enable_case_sensitive_identifier,
    * enable_user_activity_logging, query_group,
-   * search_path, require_ssl, and query monitoring metrics that let you define
-   * performance boundaries. For more information about query monitoring rules
-   * and available metrics, see Query monitoring metrics for Amazon Redshift
-   * Serverless.
+   * search_path, require_ssl, use_fips_ssl, and query monitoring metrics that
+   * let you define performance boundaries. For more information about query
+   * monitoring rules and available metrics, see Query monitoring metrics for
+   * Amazon Redshift Serverless.
    */
@@ -599,25 +599,25 @@ namespace Model

     /**
      * A value that specifies whether the workgroup can be accessible from a public
-     * network
+     * network.
      */
     inline bool GetPubliclyAccessible() const{ return m_publiclyAccessible; }

     /**
      * A value that specifies whether the workgroup can be accessible from a public
-     * network
+     * network.
      */
     inline bool PubliclyAccessibleHasBeenSet() const { return m_publiclyAccessibleHasBeenSet; }

     /**
      * A value that specifies whether the workgroup can be accessible from a public
-     * network
+     * network.
      */
     inline void SetPubliclyAccessible(bool value) { m_publiclyAccessibleHasBeenSet = true; m_publiclyAccessible = value; }

     /**
      * A value that specifies whether the workgroup can be accessible from a public
-     * network
+     * network.
      */
     inline Workgroup& WithPubliclyAccessible(bool value) { SetPubliclyAccessible(value); return *this;}

diff --git a/generated/src/aws-cpp-sdk-redshift-serverless/source/model/ListScheduledActionsResult.cpp b/generated/src/aws-cpp-sdk-redshift-serverless/source/model/ListScheduledActionsResult.cpp
index 2e89b08fbd0..02e84509e4c 100644
--- a/generated/src/aws-cpp-sdk-redshift-serverless/source/model/ListScheduledActionsResult.cpp
+++ b/generated/src/aws-cpp-sdk-redshift-serverless/source/model/ListScheduledActionsResult.cpp
@@ -40,7 +40,7 @@ ListScheduledActionsResult& ListScheduledActionsResult::operator =(const Aws::Am
     Aws::Utils::Array<JsonView> scheduledActionsJsonList = jsonValue.GetArray("scheduledActions");
     for(unsigned scheduledActionsIndex = 0; scheduledActionsIndex < scheduledActionsJsonList.GetLength(); ++scheduledActionsIndex)
     {
-      m_scheduledActions.push_back(scheduledActionsJsonList[scheduledActionsIndex].AsString());
+      m_scheduledActions.push_back(scheduledActionsJsonList[scheduledActionsIndex].AsObject());
     }

diff --git a/tools/code-generation/api-descriptions/dynamodb-2012-08-10.normal.json b/tools/code-generation/api-descriptions/dynamodb-2012-08-10.normal.json
GetResourcePolicy
follows an eventually consistent model. The following list describes the outcomes when you issue the GetResourcePolicy
request immediately after issuing another request:
If you issue a GetResourcePolicy
request immediately after a PutResourcePolicy
request, DynamoDB might return a PolicyNotFoundException
.
If you issue a GetResourcePolicy
request immediately after a DeleteResourcePolicy
request, DynamoDB might return the policy that was present before the deletion request.
If you issue a GetResourcePolicy
request immediately after a CreateTable
request, which includes a resource-based policy, DynamoDB might return a ResourceNotFoundException
or a PolicyNotFoundException
.
Because GetResourcePolicy
uses an eventually consistent query, the metadata for your policy or table might not be available at that moment. Wait for a few seconds, and then retry the GetResourcePolicy
request.
After a GetResourcePolicy
request returns a policy created using the PutResourcePolicy
request, you can assume the policy will start getting applied in the authorization of requests to the resource. Because this process is eventually consistent, it will take some time to apply the policy to all requests to a resource. Policies that you attach while creating a table using the CreateTable
request will always be applied to all requests for that table.
Returns the resource-based policy document attached to the resource, which can be a table or stream, in JSON format.
GetResourcePolicy
follows an eventually consistent model. The following list describes the outcomes when you issue the GetResourcePolicy
request immediately after issuing another request:
If you issue a GetResourcePolicy
request immediately after a PutResourcePolicy
request, DynamoDB might return a PolicyNotFoundException
.
If you issue a GetResourcePolicy
request immediately after a DeleteResourcePolicy
request, DynamoDB might return the policy that was present before the deletion request.
If you issue a GetResourcePolicy
request immediately after a CreateTable
request, which includes a resource-based policy, DynamoDB might return a ResourceNotFoundException
or a PolicyNotFoundException
.
Because GetResourcePolicy
uses an eventually consistent query, the metadata for your policy or table might not be available at that moment. Wait for a few seconds, and then retry the GetResourcePolicy
request.
After a GetResourcePolicy
request returns a policy created using the PutResourcePolicy
request, the policy will be applied in the authorization of requests to the resource. Because this process is eventually consistent, it will take some time to apply the policy to all requests to a resource. Policies that you attach while creating a table using the CreateTable
request will always be applied to all requests for that table.
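The eventual-consistency caveat above translates naturally into a retry loop on the client side. A hedged AWS SDK for C++ sketch follows; the error is matched by exception name rather than a generated error-code enum to stay independent of codegen details, and the table ARN is a placeholder:

#include <aws/core/Aws.h>
#include <aws/dynamodb/DynamoDBClient.h>
#include <aws/dynamodb/model/GetResourcePolicyRequest.h>
#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    Aws::SDKOptions options;
    Aws::InitAPI(options);
    {
        Aws::DynamoDB::DynamoDBClient client;
        Aws::DynamoDB::Model::GetResourcePolicyRequest request;
        request.SetResourceArn("arn:aws:dynamodb:us-east-1:123456789012:table/MyTable"); // placeholder

        for (int attempt = 0; attempt < 5; ++attempt)
        {
            auto outcome = client.GetResourcePolicy(request);
            if (outcome.IsSuccess())
            {
                std::cout << outcome.GetResult().GetPolicy() << std::endl;
                break;
            }
            // PolicyNotFoundException right after PutResourcePolicy or CreateTable
            // is expected under the eventually consistent model: wait and retry.
            if (outcome.GetError().GetExceptionName() == "PolicyNotFoundException")
            {
                std::this_thread::sleep_for(std::chrono::seconds(2));
                continue;
            }
            std::cerr << outcome.GetError().GetMessage() << std::endl;
            break;
        }
    }
    Aws::ShutdownAPI(options);
    return 0;
}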
-      "documentation":"Attaches a resource-based policy document to the resource, which can be a table or stream. When you attach a resource-based policy using this API, the policy application is eventually consistent. PutResourcePolicy is an idempotent operation; running it multiple times on the same resource using the same policy document will return the same revision ID. If you specify an ExpectedRevisionId which doesn't match the current policy's RevisionId, the PolicyNotFoundException will be returned."
+      "documentation":"Attaches a resource-based policy document to the resource, which can be a table or stream. When you attach a resource-based policy using this API, the policy application is eventually consistent. PutResourcePolicy is an idempotent operation; running it multiple times on the same resource using the same policy document will return the same revision ID. If you specify an ExpectedRevisionId that doesn't match the current policy's RevisionId, the PolicyNotFoundException will be returned. PutResourcePolicy is an asynchronous operation. If you issue a GetResourcePolicy request immediately after a PutResourcePolicy request, DynamoDB might return your previous policy, if there was one, or return the PolicyNotFoundException. This is because GetResourcePolicy uses an eventually consistent query, and the metadata for your policy or table might not be available at that moment. Wait for a few seconds, and then try the GetResourcePolicy request again."
    },
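Since the request shape pairs a policy document with an optional ExpectedRevisionId, a conditional attach looks roughly like the following AWS SDK for C++ sketch (generated setters assumed per the usual codegen; ARN and policy document are placeholders):

#include <aws/core/Aws.h>
#include <aws/dynamodb/DynamoDBClient.h>
#include <aws/dynamodb/model/PutResourcePolicyRequest.h>
#include <iostream>

int main()
{
    Aws::SDKOptions options;
    Aws::InitAPI(options);
    {
        Aws::DynamoDB::DynamoDBClient client;
        Aws::DynamoDB::Model::PutResourcePolicyRequest request;
        request.SetResourceArn("arn:aws:dynamodb:us-east-1:123456789012:table/MyTable"); // placeholder
        request.SetPolicy(R"({"Version":"2012-10-17","Statement":[]})");                 // placeholder document
        // Conditional write: only attach if no policy exists yet.
        request.SetExpectedRevisionId("NO_POLICY");

        auto outcome = client.PutResourcePolicy(request);
        if (outcome.IsSuccess())
            std::cout << "RevisionId: " << outcome.GetResult().GetRevisionId() << std::endl;
        else
            std::cerr << outcome.GetError().GetMessage() << std::endl;
    }
    Aws::ShutdownAPI(options);
    return 0;
}

Because the operation is idempotent, re-running it with the same document returns the same revision ID; a mismatched ExpectedRevisionId surfaces as PolicyNotFoundException, per the documentation above.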
      "ProvisionedThroughput":{
        "shape":"ProvisionedThroughput",
        "documentation":"Represents the provisioned throughput settings for the specified global secondary index. For current minimum and maximum provisioned throughput values, see Service, Account, and Table Quotas in the Amazon DynamoDB Developer Guide."
+     },
+     "OnDemandThroughput":{
+       "shape":"OnDemandThroughput",
+       "documentation":"The maximum number of read and write units for the global secondary index being created. If you use this parameter, you must specify MaxReadRequestUnits, MaxWriteRequestUnits, or both."
      }
    },
    "documentation":"Represents a new global secondary index to be added to an existing table."
  },
@@ -2028,6 +2032,10 @@
      "shape":"ProvisionedThroughputOverride",
      "documentation":"Replica-specific provisioned throughput. If not specified, uses the source table's provisioned throughput settings."
    },
+   "OnDemandThroughputOverride":{
+     "shape":"OnDemandThroughputOverride",
+     "documentation":"The maximum on-demand throughput settings for the specified replica table being created. You can only modify MaxReadRequestUnits, because you can't modify MaxWriteRequestUnits for individual replica tables."
+   },
    "GlobalSecondaryIndexes":{
      "documentation":"Replica-specific global secondary index settings."
@@ -2097,7 +2105,11 @@
    },
    "ResourcePolicy":{
      "shape":"ResourcePolicy",
-     "documentation":"An Amazon Web Services resource-based policy document in JSON format that will be attached to the table. When you attach a resource-based policy while creating a table, the policy creation is strongly consistent. The maximum size supported for a resource-based policy document is 20 KB. DynamoDB counts whitespaces when calculating the size of a policy against this limit. You can’t request an increase for this limit. For a full list of all considerations that you should keep in mind while attaching a resource-based policy, see Resource-based policy considerations."
+     "documentation":"An Amazon Web Services resource-based policy document in JSON format that will be attached to the table. When you attach a resource-based policy while creating a table, the policy application is strongly consistent. The maximum size supported for a resource-based policy document is 20 KB. DynamoDB counts whitespaces when calculating the size of a policy against this limit. For a full list of all considerations that apply for resource-based policies, see Resource-based policy considerations."
+   },
+   "OnDemandThroughput":{
+     "shape":"OnDemandThroughput",
+     "documentation":"Sets the maximum number of read and write units for the specified table in on-demand capacity mode. If you use this parameter, you must specify MaxReadRequestUnits, MaxWriteRequestUnits, or both."
    }
  },
  "documentation":"Represents the input of a CreateTable operation."
},
    "RevisionId":{
      "shape":"PolicyRevisionId",
-     "documentation":"A unique string that represents the revision ID of the policy. If you are comparing revision IDs, make sure to always use string comparison logic. This value will be empty if you make a request against a resource without a policy."
+     "documentation":"A unique string that represents the revision ID of the policy. If you're comparing revision IDs, make sure to always use string comparison logic. This value will be empty if you make a request against a resource without a policy."
    }
  }
},
@@ -3169,7 +3181,7 @@
    "RevisionId":{
      "shape":"PolicyRevisionId",
-     "documentation":"A unique string that represents the revision ID of the policy. If you are comparing revision IDs, make sure to always use string comparison logic."
+     "documentation":"A unique string that represents the revision ID of the policy. If you're comparing revision IDs, make sure to always use string comparison logic."
    }
  }
},
@@ -3196,6 +3208,10 @@
    "ProvisionedThroughput":{
      "shape":"ProvisionedThroughput",
      "documentation":"Represents the provisioned throughput settings for the specified global secondary index. For current minimum and maximum provisioned throughput values, see Service, Account, and Table Quotas in the Amazon DynamoDB Developer Guide."
+   },
+   "OnDemandThroughput":{
+     "shape":"OnDemandThroughput",
+     "documentation":"The maximum number of read and write units for the specified global secondary index. If you use this parameter, you must specify MaxReadRequestUnits, MaxWriteRequestUnits, or both."
    }
  },
  "documentation":"Represents the properties of a global secondary index."
@@ -3254,6 +3270,10 @@
    "IndexArn":{
      "shape":"String",
      "documentation":"The Amazon Resource Name (ARN) that uniquely identifies the index."
+   },
+   "OnDemandThroughput":{
+     "shape":"OnDemandThroughput",
+     "documentation":"The maximum number of read and write units for the specified global secondary index. If you use this parameter, you must specify MaxReadRequestUnits, MaxWriteRequestUnits, or both."
    }
  },
  "documentation":"Represents the properties of a global secondary index."
@@ -3280,7 +3300,8 @@
    "ProvisionedThroughput":{
      "shape":"ProvisionedThroughput",
      "documentation":"Represents the provisioned throughput settings for the specified global secondary index."
-   }
+   },
+   "OnDemandThroughput":{"shape":"OnDemandThroughput"}
  },
  "documentation":"Represents the properties of a global secondary index for the table when the backup was created."
},
@@ -4308,6 +4329,30 @@
  "type":"list",
  "member":{"shape":"NumberAttributeValue"}
},
+"OnDemandThroughput":{
+  "type":"structure",
+  "members":{
+    "MaxReadRequestUnits":{
+      "shape":"LongObject",
To specify a maximum OnDemandThroughput
on your table, set the value of MaxReadRequestUnits
as greater than or equal to 1. To remove the maximum OnDemandThroughput
that is currently set on your table, set the value of MaxReadRequestUnits
to -1.
Maximum number of write request units for the specified table.
To specify a maximum OnDemandThroughput
on your table, set the value of MaxWriteRequestUnits
as greater than or equal to 1. To remove the maximum OnDemandThroughput
that is currently set on your table, set the value of MaxWriteRequestUnits
to -1.
Sets the maximum number of read and write units for the specified on-demand table. If you use this parameter, you must specify MaxReadRequestUnits
, MaxWriteRequestUnits
, or both.
Maximum number of read request units for the specified replica table.
" + } + }, + "documentation":"Overrides the on-demand throughput settings for this replica table. If you don't specify a value for this parameter, it uses the source table's on-demand throughput settings.
" + }, "ParameterizedStatement":{ "type":"structure", "required":["Statement"], @@ -4647,11 +4692,11 @@ }, "Policy":{ "shape":"ResourcePolicy", - "documentation":"An Amazon Web Services resource-based policy document in JSON format.
The maximum size supported for a resource-based policy document is 20 KB. DynamoDB counts whitespaces when calculating the size of a policy against this limit. For a full list of all considerations that you should keep in mind while attaching a resource-based policy, see Resource-based policy considerations.
" + "documentation":"An Amazon Web Services resource-based policy document in JSON format.
The maximum size supported for a resource-based policy document is 20 KB. DynamoDB counts whitespaces when calculating the size of a policy against this limit.
Within a resource-based policy, if the action for a DynamoDB service-linked role (SLR) to replicate data for a global table is denied, adding or deleting a replica will fail with an error.
For a full list of all considerations that apply while attaching a resource-based policy, see Resource-based policy considerations.
" }, "ExpectedRevisionId":{ "shape":"PolicyRevisionId", - "documentation":"A string value that you can use to conditionally update your policy. You can provide the revision ID of your existing policy to make mutating requests against that policy. When you provide an expected revision ID, if the revision ID of the existing policy on the resource doesn't match or if there's no policy attached to the resource, your request will be rejected with a PolicyNotFoundException
.
To conditionally put a policy when no policy exists for the resource, specify NO_POLICY
for the revision ID.
A string value that you can use to conditionally update your policy. You can provide the revision ID of your existing policy to make mutating requests against that policy.
When you provide an expected revision ID, if the revision ID of the existing policy on the resource doesn't match or if there's no policy attached to the resource, your request will be rejected with a PolicyNotFoundException
.
To conditionally attach a policy when no policy exists for the resource, specify NO_POLICY
for the revision ID.
A unique string that represents the revision ID of the policy. If you are comparing revision IDs, make sure to always use string comparison logic.
" + "documentation":"A unique string that represents the revision ID of the policy. If you're comparing revision IDs, make sure to always use string comparison logic.
" } } }, @@ -4857,6 +4902,10 @@ "shape":"ProvisionedThroughputOverride", "documentation":"Replica-specific provisioned throughput. If not described, uses the source table's provisioned throughput settings.
" }, + "OnDemandThroughputOverride":{ + "shape":"OnDemandThroughputOverride", + "documentation":"Overrides the maximum on-demand throughput settings for the specified replica table.
" + }, "GlobalSecondaryIndexes":{ "shape":"ReplicaGlobalSecondaryIndexDescriptionList", "documentation":"Replica-specific global secondary index settings.
" @@ -4884,6 +4933,10 @@ "ProvisionedThroughputOverride":{ "shape":"ProvisionedThroughputOverride", "documentation":"Replica table GSI-specific provisioned throughput. If not specified, uses the source table GSI's read capacity settings.
" + }, + "OnDemandThroughputOverride":{ + "shape":"OnDemandThroughputOverride", + "documentation":"Overrides the maximum on-demand throughput settings for the specified global secondary index in the specified replica table.
" } }, "documentation":"Represents the properties of a replica global secondary index.
" @@ -4933,6 +4986,10 @@ "ProvisionedThroughputOverride":{ "shape":"ProvisionedThroughputOverride", "documentation":"If not described, uses the source table GSI's read capacity settings.
" + }, + "OnDemandThroughputOverride":{ + "shape":"OnDemandThroughputOverride", + "documentation":"Overrides the maximum on-demand throughput for the specified global secondary index in the specified replica table.
" } }, "documentation":"Represents the properties of a replica global secondary index.
" @@ -5244,6 +5301,7 @@ "shape":"ProvisionedThroughput", "documentation":"Provisioned throughput settings for the restored table.
" }, + "OnDemandThroughputOverride":{"shape":"OnDemandThroughput"}, "SSESpecificationOverride":{ "shape":"SSESpecification", "documentation":"The new server-side encryption settings for the restored table.
" @@ -5299,6 +5357,7 @@ "shape":"ProvisionedThroughput", "documentation":"Provisioned throughput settings for the restored table.
" }, + "OnDemandThroughputOverride":{"shape":"OnDemandThroughput"}, "SSESpecificationOverride":{ "shape":"SSESpecification", "documentation":"The new server-side encryption settings for the restored table.
" @@ -5613,6 +5672,7 @@ "shape":"ProvisionedThroughput", "documentation":"Read IOPs and Write IOPS on the table when the backup was created.
" }, + "OnDemandThroughput":{"shape":"OnDemandThroughput"}, "ItemCount":{ "shape":"ItemCount", "documentation":"Number of items in the table. Note that this is an approximate value.
" @@ -5764,6 +5824,7 @@ "documentation":"The billing mode for provisioning the table created as part of the import operation.
" }, "ProvisionedThroughput":{"shape":"ProvisionedThroughput"}, + "OnDemandThroughput":{"shape":"OnDemandThroughput"}, "SSESpecification":{"shape":"SSESpecification"}, "GlobalSecondaryIndexes":{ "shape":"GlobalSecondaryIndexList", @@ -5866,6 +5927,10 @@ "DeletionProtectionEnabled":{ "shape":"DeletionProtectionEnabled", "documentation":"Indicates whether deletion protection is enabled (true) or disabled (false) on the table.
" + }, + "OnDemandThroughput":{ + "shape":"OnDemandThroughput", + "documentation":"The maximum number of read and write units for the specified on-demand table. If you use this parameter, you must specify MaxReadRequestUnits
, MaxWriteRequestUnits
, or both.
Represents the properties of a table.
" @@ -6270,10 +6335,7 @@ "UpdateExpression":{"type":"string"}, "UpdateGlobalSecondaryIndexAction":{ "type":"structure", - "required":[ - "IndexName", - "ProvisionedThroughput" - ], + "required":["IndexName"], "members":{ "IndexName":{ "shape":"IndexName", @@ -6282,6 +6344,10 @@ "ProvisionedThroughput":{ "shape":"ProvisionedThroughput", "documentation":"Represents the provisioned throughput settings for the specified global secondary index.
For current minimum and maximum provisioned throughput values, see Service, Account, and Table Quotas in the Amazon DynamoDB Developer Guide.
" + }, + "OnDemandThroughput":{ + "shape":"OnDemandThroughput", + "documentation":"Updates the maximum number of read and write units for the specified global secondary index. If you use this parameter, you must specify MaxReadRequestUnits
, MaxWriteRequestUnits
, or both.
Represents the new provisioned throughput settings to be applied to a global secondary index.
" @@ -6500,6 +6566,10 @@ "shape":"ProvisionedThroughputOverride", "documentation":"Replica-specific provisioned throughput. If not specified, uses the source table's provisioned throughput settings.
" }, + "OnDemandThroughputOverride":{ + "shape":"OnDemandThroughputOverride", + "documentation":"Overrides the maximum on-demand throughput for the replica table.
" + }, "GlobalSecondaryIndexes":{ "shape":"ReplicaGlobalSecondaryIndexList", "documentation":"Replica-specific global secondary index settings.
" @@ -6554,6 +6624,10 @@ "DeletionProtectionEnabled":{ "shape":"DeletionProtectionEnabled", "documentation":"Indicates whether deletion protection is to be enabled (true) or disabled (false) on the table.
" + }, + "OnDemandThroughput":{ + "shape":"OnDemandThroughput", + "documentation":"Updates the maximum number of read and write units for the specified table in on-demand capacity mode. If you use this parameter, you must specify MaxReadRequestUnits
, MaxWriteRequestUnits
, or both.
Represents the input of an UpdateTable
operation.
Gets the default instance metadata service (IMDS) settings that are set at the account level in the specified Amazon Web Services Region.
For more information, see Order of precedence for instance metadata options in the Amazon EC2 User Guide.
" }, + "GetInstanceTpmEkPub":{ + "name":"GetInstanceTpmEkPub", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetInstanceTpmEkPubRequest"}, + "output":{"shape":"GetInstanceTpmEkPubResult"}, + "documentation":"Gets the public endorsement key associated with the Nitro Trusted Platform Module (NitroTPM) for the specified instance.
" + }, "GetInstanceTypesFromInstanceRequirements":{ "name":"GetInstanceTypesFromInstanceRequirements", "http":{ @@ -27420,6 +27430,24 @@ "locationName":"item" } }, + "EkPubKeyFormat":{ + "type":"string", + "enum":[ + "der", + "tpmt" + ] + }, + "EkPubKeyType":{ + "type":"string", + "enum":[ + "rsa-2048", + "ecc-sec-p384" + ] + }, + "EkPubKeyValue":{ + "type":"string", + "sensitive":true + }, "ElasticGpuAssociation":{ "type":"structure", "members":{ @@ -30838,6 +30866,57 @@ } } }, + "GetInstanceTpmEkPubRequest":{ + "type":"structure", + "required":[ + "InstanceId", + "KeyType", + "KeyFormat" + ], + "members":{ + "InstanceId":{ + "shape":"InstanceId", + "documentation":"The ID of the instance for which to get the public endorsement key.
" + }, + "KeyType":{ + "shape":"EkPubKeyType", + "documentation":"The required public endorsement key type.
" + }, + "KeyFormat":{ + "shape":"EkPubKeyFormat", + "documentation":"The required public endorsement key format. Specify der
for a DER-encoded public key that is compatible with OpenSSL. Specify tpmt
for a TPM 2.0 format that is compatible with tpm2-tools. The returned key is base64 encoded.
Specify this parameter to verify whether the request will succeed, without actually making the request. If the request will succeed, the response is DryRunOperation
. Otherwise, the response is UnauthorizedOperation
.
The ID of the instance.
", + "locationName":"instanceId" + }, + "KeyType":{ + "shape":"EkPubKeyType", + "documentation":"The public endorsement key type.
", + "locationName":"keyType" + }, + "KeyFormat":{ + "shape":"EkPubKeyFormat", + "documentation":"The public endorsement key format.
", + "locationName":"keyFormat" + }, + "KeyValue":{ + "shape":"EkPubKeyValue", + "documentation":"The public endorsement key material.
", + "locationName":"keyValue" + } + } + }, "GetInstanceTypesFromInstanceRequirementsRequest":{ "type":"structure", "required":[ diff --git a/tools/code-generation/api-descriptions/personalize-2018-05-22.normal.json b/tools/code-generation/api-descriptions/personalize-2018-05-22.normal.json index e152ed32d5b..6e547142686 100644 --- a/tools/code-generation/api-descriptions/personalize-2018-05-22.normal.json +++ b/tools/code-generation/api-descriptions/personalize-2018-05-22.normal.json @@ -5,6 +5,7 @@ "endpointPrefix":"personalize", "jsonVersion":"1.1", "protocol":"json", + "protocols":["json"], "serviceFullName":"Amazon Personalize", "serviceId":"Personalize", "signatureVersion":"v4", @@ -68,6 +69,24 @@ "documentation":"You incur campaign costs while it is active. To avoid unnecessary costs, make sure to delete the campaign when you are finished. For information about campaign costs, see Amazon Personalize pricing.
Creates a campaign that deploys a solution version. When a client calls the GetRecommendations and GetPersonalizedRanking APIs, a campaign is specified in the request.
Minimum Provisioned TPS and Auto-Scaling
A high minProvisionedTPS
will increase your cost. We recommend starting with 1 for minProvisionedTPS
(the default). Track your usage using Amazon CloudWatch metrics, and increase the minProvisionedTPS
as necessary.
When you create an Amazon Personalize campaign, you can specify the minimum provisioned transactions per second (minProvisionedTPS
) for the campaign. This is the baseline transaction throughput for the campaign provisioned by Amazon Personalize. It sets the minimum billing charge for the campaign while it is active. A transaction is a single GetRecommendations
or GetPersonalizedRanking
request. The default minProvisionedTPS
is 1.
If your TPS increases beyond the minProvisionedTPS
, Amazon Personalize auto-scales the provisioned capacity up and down, but never below minProvisionedTPS
. There's a short time delay while the capacity is increased that might cause loss of transactions. When your traffic reduces, capacity returns to the minProvisionedTPS
.
You are charged for the the minimum provisioned TPS or, if your requests exceed the minProvisionedTPS
, the actual TPS. The actual TPS is the total number of recommendation requests you make. We recommend starting with a low minProvisionedTPS
, track your usage using Amazon CloudWatch metrics, and then increase the minProvisionedTPS
as necessary.
For more information about campaign costs, see Amazon Personalize pricing.
Status
A campaign can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
DELETE PENDING > DELETE IN_PROGRESS
To get the campaign status, call DescribeCampaign.
Wait until the status
of the campaign is ACTIVE
before asking the campaign for recommendations.
Related APIs
", "idempotent":true }, + "CreateDataDeletionJob":{ + "name":"CreateDataDeletionJob", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateDataDeletionJobRequest"}, + "output":{"shape":"CreateDataDeletionJobResponse"}, + "errors":[ + {"shape":"InvalidInputException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ResourceAlreadyExistsException"}, + {"shape":"LimitExceededException"}, + {"shape":"ResourceInUseException"}, + {"shape":"TooManyTagsException"} + ], + "documentation":"Creates a batch job that deletes all references to specific users from an Amazon Personalize dataset group in batches. You specify the users to delete in a CSV file of userIds in an Amazon S3 bucket. After a job completes, Amazon Personalize no longer trains on the users’ data and no longer considers the users when generating user segments. For more information about creating a data deletion job, see Deleting users.
Your input file must be a CSV file with a single USER_ID column that lists the users IDs. For more information about preparing the CSV file, see Preparing your data deletion file and uploading it to Amazon S3.
To give Amazon Personalize permission to access your input CSV file of userIds, you must specify an IAM service role that has permission to read from the data source. This role needs GetObject
and ListBucket
permissions for the bucket and its content. These permissions are the same as importing data. For information on granting access to your Amazon S3 bucket, see Giving Amazon Personalize Access to Amazon S3 Resources.
After you create a job, it can take up to a day to delete all references to the users from datasets and models. Until the job completes, Amazon Personalize continues to use the data when training. And if you use a User Segmentation recipe, the users might appear in user segments.
Status
A data deletion job can have one of the following statuses:
PENDING > IN_PROGRESS > COMPLETED -or- FAILED
To get the status of the data deletion job, call DescribeDataDeletionJob API operation and specify the Amazon Resource Name (ARN) of the job. If the status is FAILED, the response includes a failureReason
key, which describes why the job failed.
Related APIs
" + }, "CreateDataset":{ "name":"CreateDataset", "http":{ @@ -458,6 +477,21 @@ "documentation":"Describes the given campaign, including its status.
A campaign can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
DELETE PENDING > DELETE IN_PROGRESS
When the status
is CREATE FAILED
, the response includes the failureReason
key, which describes why.
For more information on campaigns, see CreateCampaign.
", "idempotent":true }, + "DescribeDataDeletionJob":{ + "name":"DescribeDataDeletionJob", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeDataDeletionJobRequest"}, + "output":{"shape":"DescribeDataDeletionJobResponse"}, + "errors":[ + {"shape":"InvalidInputException"}, + {"shape":"ResourceNotFoundException"} + ], + "documentation":"Describes the data deletion job created by CreateDataDeletionJob, including the job status.
", + "idempotent":true + }, "DescribeDataset":{ "name":"DescribeDataset", "http":{ @@ -712,6 +746,21 @@ "documentation":"Returns a list of campaigns that use the given solution. When a solution is not specified, all the campaigns associated with the account are listed. The response provides the properties for each campaign, including the Amazon Resource Name (ARN). For more information on campaigns, see CreateCampaign.
", "idempotent":true }, + "ListDataDeletionJobs":{ + "name":"ListDataDeletionJobs", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListDataDeletionJobsRequest"}, + "output":{"shape":"ListDataDeletionJobsResponse"}, + "errors":[ + {"shape":"InvalidInputException"}, + {"shape":"InvalidNextTokenException"} + ], + "documentation":"Returns a list of data deletion jobs for a dataset group ordered by creation time, with the most recent first. When a dataset group is not specified, all the data deletion jobs associated with the account are listed. The response provides the properties for each job, including the Amazon Resource Name (ARN). For more information on data deletion jobs, see Deleting users.
", + "idempotent":true + }, "ListDatasetExportJobs":{ "name":"ListDatasetExportJobs", "http":{ @@ -1789,6 +1838,46 @@ } } }, + "CreateDataDeletionJobRequest":{ + "type":"structure", + "required":[ + "jobName", + "datasetGroupArn", + "dataSource", + "roleArn" + ], + "members":{ + "jobName":{ + "shape":"Name", + "documentation":"The name for the data deletion job.
" + }, + "datasetGroupArn":{ + "shape":"Arn", + "documentation":"The Amazon Resource Name (ARN) of the dataset group that has the datasets you want to delete records from.
" + }, + "dataSource":{ + "shape":"DataSource", + "documentation":"The Amazon S3 bucket that contains the list of userIds of the users to delete.
" + }, + "roleArn":{ + "shape":"RoleArn", + "documentation":"The Amazon Resource Name (ARN) of the IAM role that has permissions to read from the Amazon S3 data source.
" + }, + "tags":{ + "shape":"Tags", + "documentation":"A list of tags to apply to the data deletion job.
" + } + } + }, + "CreateDataDeletionJobResponse":{ + "type":"structure", + "members":{ + "dataDeletionJobArn":{ + "shape":"Arn", + "documentation":"The Amazon Resource Name (ARN) of the data deletion job.
" + } + } + }, "CreateDatasetExportJobRequest":{ "type":"structure", "required":[ @@ -2219,15 +2308,97 @@ } } }, + "DataDeletionJob":{ + "type":"structure", + "members":{ + "jobName":{ + "shape":"Name", + "documentation":"The name of the data deletion job.
" + }, + "dataDeletionJobArn":{ + "shape":"Arn", + "documentation":"The Amazon Resource Name (ARN) of the data deletion job.
" + }, + "datasetGroupArn":{ + "shape":"Arn", + "documentation":"The Amazon Resource Name (ARN) of the dataset group the job deletes records from.
" + }, + "dataSource":{"shape":"DataSource"}, + "roleArn":{ + "shape":"RoleArn", + "documentation":"The Amazon Resource Name (ARN) of the IAM role that has permissions to read from the Amazon S3 data source.
" + }, + "status":{ + "shape":"Status", + "documentation":"The status of the data deletion job.
A data deletion job can have one of the following statuses:
PENDING > IN_PROGRESS > COMPLETED -or- FAILED
The number of records deleted by a COMPLETED job.
" + }, + "creationDateTime":{ + "shape":"Date", + "documentation":"The creation date and time (in Unix time) of the data deletion job.
" + }, + "lastUpdatedDateTime":{ + "shape":"Date", + "documentation":"The date and time (in Unix time) the data deletion job was last updated.
" + }, + "failureReason":{ + "shape":"FailureReason", + "documentation":"If a data deletion job fails, provides the reason why.
" + } + }, + "documentation":"Describes a job that deletes all references to specific users from an Amazon Personalize dataset group in batches. For information about creating a data deletion job, see Deleting users.
" + }, + "DataDeletionJobSummary":{ + "type":"structure", + "members":{ + "dataDeletionJobArn":{ + "shape":"Arn", + "documentation":"The Amazon Resource Name (ARN) of the data deletion job.
" + }, + "datasetGroupArn":{ + "shape":"Arn", + "documentation":"The Amazon Resource Name (ARN) of the dataset group the job deleted records from.
" + }, + "jobName":{ + "shape":"Name", + "documentation":"The name of the data deletion job.
" + }, + "status":{ + "shape":"Status", + "documentation":"The status of the data deletion job.
A data deletion job can have one of the following statuses:
PENDING > IN_PROGRESS > COMPLETED -or- FAILED
The creation date and time (in Unix time) of the data deletion job.
" + }, + "lastUpdatedDateTime":{ + "shape":"Date", + "documentation":"The date and time (in Unix time) the data deletion job was last updated.
" + }, + "failureReason":{ + "shape":"FailureReason", + "documentation":"If a data deletion job fails, provides the reason why.
" + } + }, + "documentation":"Provides a summary of the properties of a data deletion job. For a complete listing, call the DescribeDataDeletionJob API operation.
" + }, + "DataDeletionJobs":{ + "type":"list", + "member":{"shape":"DataDeletionJobSummary"}, + "max":100 + }, "DataSource":{ "type":"structure", "members":{ "dataLocation":{ "shape":"S3Location", - "documentation":"The path to the Amazon S3 bucket where the data that you want to upload to your dataset is stored. For example:
s3://bucket-name/folder-name/
For dataset import jobs, the path to the Amazon S3 bucket where the data that you want to upload to your dataset is stored. For data deletion jobs, the path to the Amazon S3 bucket that stores the list of records to delete.
For example:
s3://bucket-name/folder-name/fileName.csv
If your CSV files are in a folder in your Amazon S3 bucket and you want your import job or data deletion job to consider multiple files, you can specify the path to the folder. With a data deletion job, Amazon Personalize uses all files in the folder and any sub folder. Use the following syntax with a /
after the folder name:
s3://bucket-name/folder-name/
Describes the data source that contains the data to upload to a dataset.
" + "documentation":"Describes the data source that contains the data to upload to a dataset, or the list of records to delete from Amazon Personalize.
" }, "Dataset":{ "type":"structure", @@ -2917,6 +3088,25 @@ } } }, + "DescribeDataDeletionJobRequest":{ + "type":"structure", + "required":["dataDeletionJobArn"], + "members":{ + "dataDeletionJobArn":{ + "shape":"Arn", + "documentation":"The Amazon Resource Name (ARN) of the data deletion job.
" + } + } + }, + "DescribeDataDeletionJobResponse":{ + "type":"structure", + "members":{ + "dataDeletionJob":{ + "shape":"DataDeletionJob", + "documentation":"Information about the data deletion job, including the status.
The status is one of the following values:
PENDING
IN_PROGRESS
COMPLETED
FAILED
The Amazon Resource Name (ARN) of the dataset group to list data deletion jobs for.
" + }, + "nextToken":{ + "shape":"NextToken", + "documentation":"A token returned from the previous call to ListDataDeletionJobs
for getting the next set of jobs (if they exist).
The maximum number of data deletion jobs to return.
" + } + } + }, + "ListDataDeletionJobsResponse":{ + "type":"structure", + "members":{ + "dataDeletionJobs":{ + "shape":"DataDeletionJobs", + "documentation":"The list of data deletion jobs.
" + }, + "nextToken":{ + "shape":"NextToken", + "documentation":"A token for getting the next set of data deletion jobs (if they exist).
" + } + } + }, "ListDatasetExportJobsRequest":{ "type":"structure", "members":{ diff --git a/tools/code-generation/api-descriptions/redshift-serverless-2021-04-21.normal.json b/tools/code-generation/api-descriptions/redshift-serverless-2021-04-21.normal.json index 8dc9f0454c7..170fcd51856 100644 --- a/tools/code-generation/api-descriptions/redshift-serverless-2021-04-21.normal.json +++ b/tools/code-generation/api-descriptions/redshift-serverless-2021-04-21.normal.json @@ -975,7 +975,7 @@ "members":{ "parameterKey":{ "shape":"ParameterKey", - "documentation":"The key of the parameter. The options are auto_mv
, datestyle
, enable_case_sensitive_identifier
, enable_user_activity_logging
, query_group
, search_path
, require_ssl
, and query monitoring metrics that let you define performance boundaries. For more information about query monitoring rules and available metrics, see Query monitoring metrics for Amazon Redshift Serverless.
The key of the parameter. The options are auto_mv
, datestyle
, enable_case_sensitive_identifier
, enable_user_activity_logging
, query_group
, search_path
, require_ssl
, use_fips_ssl
, and query monitoring metrics that let you define performance boundaries. For more information about query monitoring rules and available metrics, see Query monitoring metrics for Amazon Redshift Serverless.
An array of parameters to set for advanced control over a database. The options are auto_mv
, datestyle
, enable_case_sensitive_identifier
, enable_user_activity_logging
, query_group
, search_path
, require_ssl
, and query monitoring metrics that let you define performance boundaries. For more information about query monitoring rules and available metrics, see Query monitoring metrics for Amazon Redshift Serverless.
An array of parameters to set for advanced control over a database. The options are auto_mv
, datestyle
, enable_case_sensitive_identifier
, enable_user_activity_logging
, query_group
, search_path
, require_ssl
, use_fips_ssl
, and query monitoring metrics that let you define performance boundaries. For more information about query monitoring rules and available metrics, see Query monitoring metrics for Amazon Redshift Serverless.
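Since the visible change here is the addition of use_fips_ssl to the documented parameter keys, a short sketch of setting it on a workgroup may help. It assumes the generated RedshiftServerless client and the usual Set*/Add* accessors; the workgroup name is a placeholder.

// Minimal sketch: enable FIPS-compliant SSL on an existing workgroup.
#include <aws/core/Aws.h>
#include <aws/redshift-serverless/RedshiftServerlessClient.h>
#include <aws/redshift-serverless/model/UpdateWorkgroupRequest.h>
#include <aws/redshift-serverless/model/ConfigParameter.h>
#include <iostream>

int main()
{
    Aws::SDKOptions options;
    Aws::InitAPI(options);
    {
        Aws::RedshiftServerless::RedshiftServerlessClient client;

        Aws::RedshiftServerless::Model::ConfigParameter fipsSsl;
        fipsSsl.SetParameterKey("use_fips_ssl");
        fipsSsl.SetParameterValue("true");

        Aws::RedshiftServerless::Model::UpdateWorkgroupRequest request;
        request.SetWorkgroupName("example-workgroup");
        request.AddConfigParameters(fipsSsl);

        auto outcome = client.UpdateWorkgroup(request);
        if (!outcome.IsSuccess())
        {
            std::cerr << outcome.GetError().GetMessage() << std::endl;
        }
    }
    Aws::ShutdownAPI(options);
    return 0;
}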
All of the returned scheduled action objects.
" + "documentation":"All of the returned scheduled action association objects.
" } } }, @@ -2888,6 +2888,20 @@ "documentation":"The schedule of when Amazon Redshift Serverless should run the scheduled action.
", "union":true }, + "ScheduledActionAssociation":{ + "type":"structure", + "members":{ + "namespaceName":{ + "shape":"NamespaceName", + "documentation":"Name of associated Amazon Redshift Serverless namespace.
" + }, + "scheduledActionName":{ + "shape":"ScheduledActionName", + "documentation":"Name of associated scheduled action.
" + } + }, + "documentation":"Contains names of objects associated with a scheduled action.
" + }, "ScheduledActionName":{ "type":"string", "max":60, @@ -2943,7 +2957,7 @@ }, "ScheduledActionsList":{ "type":"list", - "member":{"shape":"ScheduledActionName"} + "member":{"shape":"ScheduledActionAssociation"} }, "SecurityGroupId":{"type":"string"}, "SecurityGroupIdList":{ @@ -3561,7 +3575,7 @@ }, "configParameters":{ "shape":"ConfigParameterList", - "documentation":"An array of parameters to set for advanced control over a database. The options are auto_mv
, datestyle
, enable_case_sensitive_identifier
, enable_user_activity_logging
, query_group
, search_path
, require_ssl
, and query monitoring metrics that let you define performance boundaries. For more information about query monitoring rules and available metrics, see Query monitoring metrics for Amazon Redshift Serverless.
An array of parameters to set for advanced control over a database. The options are auto_mv
, datestyle
, enable_case_sensitive_identifier
, enable_user_activity_logging
, query_group
, search_path
, require_ssl
, use_fips_ssl
, and query monitoring metrics that let you define performance boundaries. For more information about query monitoring rules and available metrics, see Query monitoring metrics for Amazon Redshift Serverless.
An array of parameters to set for advanced control over a database. The options are auto_mv
, datestyle
, enable_case_sensitive_identifier
, enable_user_activity_logging
, query_group
, search_path
, require_ssl
, and query monitoring metrics that let you define performance boundaries. For more information about query monitoring rules and available metrics, see Query monitoring metrics for Amazon Redshift Serverless.
An array of parameters to set for advanced control over a database. The options are auto_mv
, datestyle
, enable_case_sensitive_identifier
, enable_user_activity_logging
, query_group
, search_path
, require_ssl
, use_fips_ssl
, and query monitoring metrics that let you define performance boundaries. For more information about query monitoring rules and available metrics, see Query monitoring metrics for Amazon Redshift Serverless.
A value that specifies whether the workgroup can be accessible from a public network
" + "documentation":"A value that specifies whether the workgroup can be accessible from a public network.
" }, "securityGroupIds":{ "shape":"SecurityGroupIdList",