
User support: S3 Storage request for Ceph drive telemetry #198

Closed
yaarith opened this issue Apr 9, 2021 · 10 comments
Comments

@yaarith

yaarith commented Apr 9, 2021

Questionnaire

  1. User details:
    Your MOC user name and which applications deployed by our team you're planning to use the bucket at:
    [email protected]
    JupyterHub, because the Ceph telemetry dataset uploaded here will be consumed by a Jupyter notebook (see the read sketch after this questionnaire).

  2. Desired bucket name:
    Due to technical limitations we can't promise you a specific bucket name; instead, we will use your preferred name as a prefix for the bucket name:
    ceph-drive-telemetry

  3. Maximal bucket size:
    Planned maximal total bucket size in GiB. Default quota is 2 GiB.
    1 TB

  4. Maximal bucket object count:
    Planned maximal count of objects in your bucket. Default quota is 10000.
    Default (10K) is fine.

  5. Your GPG key or contact:
    8B123ADF2E14A94C074359ADA57D798AED1AD14D
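
For illustration, a minimal sketch of how the telemetry dataset might be read from this bucket inside a JupyterHub notebook; boto3 is assumed, and the endpoint URL, credentials, bucket name, and object key are placeholders rather than values provisioned by this request:

```python
# A minimal sketch (not the provisioned setup) of reading the telemetry dataset
# from the bucket inside a JupyterHub notebook. The endpoint, credentials,
# bucket name, and object key below are placeholders.
import os

import boto3
import pandas as pd

s3 = boto3.client(
    "s3",
    endpoint_url=os.environ["S3_ENDPOINT_URL"],            # cluster or external S3 endpoint
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
)

# "ceph-drive-telemetry" is the requested prefix; the final bucket name may differ.
obj = s3.get_object(Bucket="ceph-drive-telemetry-example", Key="telemetry/devices.csv")
df = pd.read_csv(obj["Body"])
df.head()
```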

@HumairAK
Member

HumairAK commented Apr 9, 2021

@yaarith -- will you be planning to access this bucket externally as well?

@yaarith
Author

yaarith commented Apr 9, 2021

> @yaarith -- will you be planning to access this bucket externally as well?

@HumairAK yes

@chauhankaranraj
Member

@HumairAK the goal here is to open source the usage data that the Ceph team has been collecting, similar to how Backblaze publishes its hard disk data every quarter. So yes, I believe this data should be publicly accessible.

If this is not the right place to store such data, then please let us know where we should be storing it instead :)
I'm aware of the storage available via open-infrastructure-labs/ops-issues#33 - would this be a better place for this dataset?

Related issue: aicoe-aiops/ceph_drive_failure/issues/29

@tumido
Member

tumido commented Apr 12, 2021

@chauhankaranraj The in-cluster S3-compatible storage (Rook/Ceph/NooBaa, aka OpenShift Container Storage) is not intended as a persistent, resilient, rock-solid storage facility. If the cluster goes down, the data will be lost. It is intended more as operational storage for faster access to data that is actively being worked with.

Currently OCS doesn't support multiple sets of credentials per bucket, nor is there a way to declaratively enforce bucket policies that allow public access. (You can work around it, but in general it's not supported.)
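
For context, a rough sketch of the kind of workaround alluded to above: attaching an anonymous-read bucket policy through the S3 API. The endpoint, credentials, and bucket name are placeholders, and this is not an officially supported OCS configuration:

```python
# A rough sketch of the unsupported workaround mentioned above: attaching an
# anonymous-read bucket policy via the S3 API. All names, credentials, and the
# endpoint are placeholders; OCS does not manage such policies declaratively.
import json
import os

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url=os.environ["S3_ENDPOINT_URL"],
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
)

# Allow anonymous read access to every object in the (placeholder) bucket.
public_read_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::ceph-drive-telemetry-example/*"],
        }
    ],
}

s3.put_bucket_policy(
    Bucket="ceph-drive-telemetry-example",
    Policy=json.dumps(public_read_policy),
)
```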

If you want long-term, resilient, and mature storage with granular access control, open-infrastructure-labs/ops-issues#33 would be the right choice.

@chauhankaranraj
Member

> @chauhankaranraj The in-cluster S3-compatible storage (Rook/Ceph/NooBaa, aka OpenShift Container Storage) is not intended as a persistent, resilient, rock-solid storage facility. If the cluster goes down, the data will be lost. It is intended more as operational storage for faster access to data that is actively being worked with.
>
> Currently OCS doesn't support multiple sets of credentials per bucket, nor is there a way to declaratively enforce bucket policies that allow public access. (You can work around it, but in general it's not supported.)

Makes sense, thanks for the clarification!

> If you want long-term, resilient, and mature storage with granular access control, open-infrastructure-labs/ops-issues#33 would be the right choice.

Yes, I believe that's pretty much what we want in this case (@yaarith, pls correct me if I'm wrong). I'll request storage there instead :)

@HumairAK
Member

Great, once @yaarith confirms, I'll close the other PR.

@yaarith
Author

yaarith commented Apr 12, 2021

Yes, we need persistent storage in our case; we'll use open-infrastructure-labs/ops-issues#33 instead.
Thanks everyone!

@sesheta
Member

sesheta commented Oct 14, 2021

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@sesheta added the lifecycle/stale label (denotes an issue or PR that has remained open with no activity and has become stale) on Oct 14, 2021
@HumairAK
Member

/close

@sesheta
Member

sesheta commented Oct 14, 2021

@HumairAK: Closing this issue.

In response to this:

> /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@sesheta closed this as completed on Oct 14, 2021