
Commit

Merge branch 'w/2.8/improvement/ZENKO-4941' into tmp/octopus/w/2.9/improvement/ZENKO-4941
bert-e committed Dec 4, 2024
2 parents 8797156 + 9209e0d commit 3134a99
Showing 7 changed files with 218 additions and 41 deletions.
3 changes: 3 additions & 0 deletions .github/scripts/end2end/configs/zenko.yaml
@@ -123,6 +123,9 @@ spec:
azure:
archiveTier: "hot"
restoreTimeout: "15s"
scuba:
logging:
logLevel: debug
ingress:
workloadPlaneClass: 'nginx'
controlPlaneClass: 'nginx-control-plane'
10 changes: 8 additions & 2 deletions tests/ctst/common/hooks.ts
@@ -18,7 +18,7 @@ import {
process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0';

const { atMostOnePicklePerTag } = parallelCanAssignHelpers;
const noParallelRun = atMostOnePicklePerTag(['@AfterAll', '@PRA', '@ColdStorage']);
const noParallelRun = atMostOnePicklePerTag(['@AfterAll', '@PRA', '@ColdStorage', '@Utilization']);

setParallelCanAssign(noParallelRun);

@@ -55,7 +55,13 @@ After(async function (this: Zenko, results) {
);
});

After({ tags: '@Quotas' }, async function () {
After({ tags: '@Quotas' }, async function (this: Zenko, results) {
if (results.result?.status === 'FAILED') {
this.logger.warn('quota was not cleaned for test', {
bucket: this.getSaved<string>('bucketName'),
});
return;
}
await teardownQuotaScenarios(this as Zenko);
});

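For context, a minimal sketch of how this tag-based serialization is wired up,
assuming the standard @cucumber/cucumber helpers (the suite may re-export them
through cli-testing); the @Utilization tag added above simply joins the list:

    import { setParallelCanAssign, parallelCanAssignHelpers } from '@cucumber/cucumber';

    const { atMostOnePicklePerTag } = parallelCanAssignHelpers;

    // A pickle carrying one of these tags is never assigned to a worker while
    // another pickle with the same tag is still in flight, so tagged scenarios
    // run serially even when the suite itself runs in parallel.
    setParallelCanAssign(atMostOnePicklePerTag(['@AfterAll', '@PRA', '@ColdStorage', '@Utilization']));
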
109 changes: 106 additions & 3 deletions tests/ctst/features/quotas/Quotas.feature
@@ -72,11 +72,9 @@ Feature: Quota Management for APIs
Given an action "DeleteObject"
And a permission to perform the "PutObject" action
And a STORAGE_MANAGER type
And a bucket quota set to 10000 B
And an account quota set to 10000 B
And an upload size of 1000 B for the object "obj-1"
And a bucket quota set to <bucketQuota> B
And an account quota set to <accountQuota> B
And an upload size of 200 B for the object "obj-1"
And a <userType> type
And an environment setup for the API
And an "existing" IAM Policy that "applies" with "ALLOW" effect for the current API
@@ -97,6 +95,80 @@
| 100 | 0 | 200 | IAM_USER |
| 100 | 200 | 200 | IAM_USER |

@2.6.0
@PreMerge
@Quotas
@CronJob
@DataDeletion
@NonVersioned
Scenario Outline: Quotas are affected by deletion operations between count-items runs
Given an action "DeleteObject"
And a permission to perform the "PutObject" action
And a STORAGE_MANAGER type
And a bucket quota set to 1000 B
And an account quota set to 1000 B
And an upload size of 1000 B for the object "obj-1"
And a bucket quota set to <bucketQuota> B
And an account quota set to <accountQuota> B
And a <userType> type
And an environment setup for the API
And an "existing" IAM Policy that "applies" with "ALLOW" effect for the current API
When I wait 3 seconds
And I PUT an object with size <uploadSize>
Then the API should "fail" with "QuotaExceeded"
When the "count-items" cronjobs completes without error
# Wait for inflights to be read by SCUBA
When I wait 3 seconds
# At this point, if negative inflights were not supported, the write would
# not be possible, as the previous inflights are now part of the current
# metrics.
And i delete object "obj-1"
# Wait for inflights to be read by SCUBA
And I wait 3 seconds
And I PUT an object with size <uploadSize>
Then the API should "succeed" with ""

Examples:
| uploadSize | bucketQuota | accountQuota | userType |
| 100 | 200 | 0 | ACCOUNT |

@2.6.0
@PreMerge
@Quotas
@CronJob
@DataDeletion
@NonVersioned
Scenario Outline: Negative inflights do not allow bypassing the quota
Given an action "DeleteObject"
And a permission to perform the "PutObject" action
And a STORAGE_MANAGER type
And a bucket quota set to 1000 B
And an account quota set to 1000 B
And an upload size of 1000 B for the object "obj-1"
And a bucket quota set to <bucketQuota> B
And an account quota set to <accountQuota> B
And a <userType> type
And an environment setup for the API
And an "existing" IAM Policy that "applies" with "ALLOW" effect for the current API
When I wait 3 seconds
And I PUT an object with size <uploadSize>
Then the API should "fail" with "QuotaExceeded"
When the "count-items" cronjobs completes without error
# Wait for inflights to be read by SCUBA
When I wait 3 seconds
# At this point, if negative inflights were not supported, the write would
# not be possible, as the previous inflights are now part of the current
# metrics (see the sketch after this scenario's Examples).
And i delete object "obj-1"
# Wait for inflights to be read by SCUBA
And I wait 3 seconds
And I PUT an object with size 1000
Then the API should "fail" with "QuotaExceeded"

Examples:
| uploadSize | bucketQuota | accountQuota | userType |
| 200 | 200 | 0 | ACCOUNT |
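
Taken together, these two scenarios pin down the accounting rule: a deletion
recorded as a negative inflight may offset usage already captured by
count-items, but never drives effective usage below zero. A hypothetical
sketch of that check (illustrative only; the real enforcement lives in
CloudServer and SCUBA, and every name below is invented):

    // Hypothetical quota check, not Zenko's actual implementation.
    function wouldExceedQuota(
        storedBytes: number,    // usage captured by the last count-items run
        inflightBytes: number,  // deltas since that run; deletions make this negative
        requestBytes: number,   // size of the incoming PUT
        quota: number,
    ): boolean {
        // Clamp at zero so a large deletion cannot "bank" negative usage
        // and let a later write blow past the quota.
        const effectiveUsage = Math.max(storedBytes + inflightBytes, 0);
        return effectiveUsage + requestBytes > quota;
    }

With a 200 B quota, once count-items has recorded obj-1 (1000 B) and the
delete has posted -1000 B of inflights, a 100 B PUT sees
max(1000 - 1000, 0) + 100 = 100 <= 200 and succeeds (previous scenario), while
a 1000 B PUT sees 0 + 1000 = 1000 > 200 and is rejected (this scenario).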

@2.6.0
@PreMerge
@Quotas
@@ -129,3 +201,34 @@ Feature: Quota Management for APIs
| RestoreObject | 100 | 0 | 99 | IAM_USER | fail | QuotaExceeded | 3 |
| RestoreObject | 100 | 99 | 99 | IAM_USER | fail | QuotaExceeded | 3 |
| RestoreObject | 100 | 101 | 101 | IAM_USER | succeed | | 3 |

@2.6.0
@PreMerge
@Quotas
@Restore
@Dmf
@ColdStorage
@Only
Scenario Outline: Restored object expiration updates quotas
Given an action "<action>"
And a STORAGE_MANAGER type
And a transition workflow to "e2e-cold" location
And an upload size of <uploadSize> B for the object "obj-1"
Then object "obj-1" should be "transitioned" and have the storage class "e2e-cold"
Given a bucket quota set to <bucketQuota> B
And an account quota set to <accountQuota> B
And a <userType> type
And an environment setup for the API
And an "existing" IAM Policy that "applies" with "ALLOW" effect for the current API
When i restore object "" for 5 days
Then the API should "succeed" with ""
And object "obj-1" should be "restored" and have the storage class "e2e-cold"
Given a STORAGE_MANAGER type
Then object "obj-1" should expire in 5 days
When i wait for 5 days
Then object "obj-1" should be "cold" and have the storage class "e2e-cold"

Examples:
| action | uploadSize | bucketQuota | accountQuota | userType |
| RestoreObject | 100 | 0 | 0 | ACCOUNT |
| RestoreObject | 100 | 101 | 101 | ACCOUNT |
14 changes: 14 additions & 0 deletions tests/ctst/steps/quotas/quotas.ts
@@ -6,6 +6,7 @@ import { Scality, Command, Utils, AWSCredentials, Constants, Identity, IdentityE
import { createJobAndWaitForCompletion } from '../utils/kubernetes';
import { createBucketWithConfiguration, putObject } from '../utils/utils';
import { hashStringAndKeepFirst20Characters } from 'common/utils';
import assert from 'assert';

export async function prepareQuotaScenarios(world: Zenko, scenarioConfiguration: ITestCaseHookParameter) {
/**
@@ -136,6 +137,16 @@ Given('a bucket quota set to {int} B', async function (this: Zenko, quota: numbe
result,
});

// Ensure the quota is set
const resultGet: Command = await Scality.getBucketQuota(
this.parameters,
this.getCommandParameters());
this.logger.debug('GetBucketQuota result', {
resultGet,
});

assert(resultGet.stdout.includes(`${quota}`));

if (result.err) {
throw new Error(result.err);
}
@@ -158,6 +169,9 @@ Given('an account quota set to {int} B', async function (this: Zenko, quota: num
result,
});

// Ensure the quota is set
assert(JSON.parse(result.stdout).quota === quota);

if (result.err) {
throw new Error(result.err);
}
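
Note that both new assertions run before the result.err check, so a failed or
malformed CLI response surfaces as an AssertionError (or a JSON parse error)
rather than as the CLI's own error message. The output shapes the assertions
imply are sketched below (assumed for illustration; the actual formats come
from the Scality CLI wrappers, which this diff does not show):

    import assert from 'assert';

    // Assumed output shapes, inferred from the assertions above:
    const resultGet = { stdout: '{"quota":200}' };  // GetBucketQuota: stdout mentions the value
    const result = { stdout: '{"quota":200}' };     // account update: stdout parses as JSON

    const quota = 200;
    assert(resultGet.stdout.includes(`${quota}`));      // "200" appears in stdout
    assert(JSON.parse(result.stdout).quota === quota);  // parsed quota matches
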
69 changes: 53 additions & 16 deletions tests/ctst/steps/utils/kubernetes.ts
@@ -1,3 +1,6 @@
import fs from 'fs';
import * as path from 'path';
import lockFile from 'proper-lockfile';
import { KubernetesHelper, Utils } from 'cli-testing';
import Zenko from 'world/Zenko';
import {
@@ -71,12 +74,39 @@ export function createKubeCustomObjectClient(world: Zenko): CustomObjectsApi {
return KubernetesHelper.customObject;
}

export async function createJobAndWaitForCompletion(world: Zenko, jobName: string, customMetadata?: string) {
export async function createJobAndWaitForCompletion(
world: Zenko,
jobName: string,
customMetadata?: string
) {
const batchClient = createKubeBatchClient(world);
const watchClient = createKubeWatchClient(world);

const lockFilePath = path.join('/tmp', `${jobName}.lock`);
let releaseLock: (() => Promise<void>) | false = false;

if (!fs.existsSync(lockFilePath)) {
fs.writeFileSync(lockFilePath, '');
}

try {
releaseLock = await lockFile.lock(lockFilePath, {
// Expect that a queued job takes no more than 5 minutes; locks older than 10 minutes are treated as stale
stale: 10 * 60 * 1000,
// Retry every second at a constant interval (roughly 10 minutes in total)
retries: {
retries: 610,
factor: 1,
minTimeout: 1000,
maxTimeout: 1000,
},
});
world.logger.debug(`Acquired lock for job: ${jobName}`);

// Read the cron job and prepare the job spec
const cronJob = await batchClient.readNamespacedCronJob(jobName, 'default');
const cronJobSpec = cronJob.body.spec?.jobTemplate.spec;

const job = new V1Job();
const metadata = new V1ObjectMeta();
job.apiVersion = 'batch/v1';
@@ -87,50 +117,57 @@ export async function createJobAndWaitForCompletion(world: Zenko, jobName: strin
'cronjob.kubernetes.io/instantiate': 'ctst',
};
if (customMetadata) {
metadata.annotations = {
custom: customMetadata,
};
metadata.annotations.custom = customMetadata;
}
job.metadata = metadata;

// Create the job
const response = await batchClient.createNamespacedJob('default', job);
world.logger.debug('job created', {
job: response.body.metadata,
});
world.logger.debug('Job created', { job: response.body.metadata });

const expectedJobName = response.body.metadata?.name;

// Watch for job completion
await new Promise<void>((resolve, reject) => {
void watchClient.watch(
'/apis/batch/v1/namespaces/default/jobs',
{},
(type: string, apiObj, watchObj) => {
if (job.metadata?.name && expectedJobName &&
(watchObj.object?.metadata?.name as string)?.startsWith?.(expectedJobName)) {
if (
expectedJobName &&
(watchObj.object?.metadata?.name as string)?.startsWith?.(expectedJobName)
) {
if (watchObj.object?.status?.succeeded) {
world.logger.debug('job succeeded', {
job: job.metadata,
});
world.logger.debug('Job succeeded', { job: job.metadata });
resolve();
} else if (watchObj.object?.status?.failed) {
world.logger.debug('job failed', {
world.logger.debug('Job failed', {
job: job.metadata,
object: watchObj.object,
});
reject(new Error('job failed'));
reject(new Error('Job failed'));
}
}
}, reject);
},
reject
);
});
} catch (err: unknown) {
world.logger.error('error creating job', {
world.logger.error('Error creating or waiting for job completion', {
jobName,
err,
});
throw err;
} finally {
// Ensure the lock is released
if (releaseLock) {
await releaseLock();
world.logger.debug(`Released lock for job: ${jobName}`);
}
}
}
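
The lock serializes Cucumber workers that trigger the same CronJob: with 610
retries at a constant 1-second interval, a waiter gives up after roughly 10
minutes, matching the 10-minute stale window that reclaims locks left behind
by crashed runs. A minimal usage sketch from a step definition (the step text
mirrors the feature files above; the call site itself is illustrative):

    import { When } from '@cucumber/cucumber';
    import Zenko from 'world/Zenko';
    import { createJobAndWaitForCompletion } from '../utils/kubernetes';

    // Run the named CronJob (e.g. 'count-items') as a one-off Job and block
    // until it completes; parallel scenarios naming the same job queue on the
    // per-job lock file.
    When('the {string} cronjobs completes without error', async function (this: Zenko, jobName: string) {
        await createJobAndWaitForCompletion(this, jobName);
    });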


export async function waitForZenkoToStabilize(
world: Zenko, needsReconciliation = false, timeout = 15 * 60 * 1000, namespace = 'default') {
// ZKOP pulls the overlay configuration from Pensieve every 5 seconds
