User support: S3 Storage request for Ceph drive telemetry #198
Comments
@yaarith -- are you planning to access this bucket externally as well?
@HumairAK the goal here is to open source the usage data that the Ceph team has been collecting, similar to how Backblaze publishes its hard disk data every quarter. So I believe yes, this data should be publicly accessible. If this is not the right place to store such data, then please let us know where we should be storing it instead :) Related issue: aicoe-aiops/ceph_drive_failure/issues/29
@chauhankaranraj The in-cluster S3-compatible storage (Rook/Ceph/NooBaa, aka OpenShift Container Storage) is not intended as a persistent, resilient, rock-solid storage facility. If the cluster goes down, the data will be lost. It is intended more as operational storage for faster access to data that is actively being worked with. Currently OCS doesn't support multiple sets of credentials per bucket, nor is there a way to declaratively enforce policies on buckets to allow public access. (You can work around it, but in general it's not supported.) If you want long-term, resilient, and mature storage with granular access control, open-infrastructure-labs/ops-issues#33 would be the right choice.
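For illustration only, here is a minimal sketch of the kind of unsupported workaround mentioned above: applying a public-read bucket policy through the plain S3 API of the in-cluster endpoint. The endpoint URL, credentials, and bucket name are placeholders, and whether the OCS/NooBaa gateway honors such a policy depends on the deployment.

```python
# Hypothetical sketch of the unsupported workaround: pushing a public-read
# bucket policy via the S3 API. Endpoint, credentials, and bucket name are
# placeholders, not real values from this request.
import json
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3-openshift-storage.example.com",  # placeholder OCS endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

bucket = "ceph-drive-telemetry-example"  # placeholder bucket name
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": [f"arn:aws:s3:::{bucket}/*"],
        }
    ],
}

# put_bucket_policy is standard S3 API; whether the in-cluster gateway
# enforces it is deployment-specific, which is why this path is unsupported.
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```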
Makes sense, thanks for the clarification!
Yes, I believe that's pretty much what we want in this case (@yaarith, please correct me if I'm wrong). I'll request storage there instead :)
Great, once @yaarith confirms, I'll close the other PR.
Yes, we need persistent storage in our case; we'll use open-infrastructure-labs/ops-issues#33 instead.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /lifecycle stale
/close
@HumairAK: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Questionnaire
User details:
Your MOC user name and which of the applications deployed by our team you're planning to use the bucket with:
[email protected]
JupyterHub, because the Ceph telemetry dataset uploaded here will be consumed by a Jupyter notebook (see the sketch after the questionnaire).
Desired bucket name:
Due to technical limitations we can't promise you an exact bucket name; instead we will use your preferred name as a prefix to the bucket name:
ceph-drive-telemetry
Maximal bucket size:
Planned maximal total bucket size in GiB. Default quota is 2GiB.
1 TB
Maximal bucket object count:
Planned maximal count of objects in your bucket. Default quota is 10000.
Default (10K) is fine.
Your GPG key or contact:
8B123ADF2E14A94C074359ADA57D798AED1AD14D
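For context, a minimal sketch of how a notebook on JupyterHub might read the uploaded telemetry from such a bucket. The endpoint URL, credentials, bucket name, object prefix, and key are all assumptions for illustration, not the values that will actually be provisioned.

```python
# Hypothetical sketch: reading Ceph drive telemetry objects from the requested
# bucket inside a Jupyter notebook. Endpoint, credentials, bucket, and key are
# placeholders only.
import boto3
import pandas as pd

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-moc-storage.org",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# List a few objects under an assumed telemetry prefix to see what is available.
resp = s3.list_objects_v2(Bucket="ceph-drive-telemetry-example", Prefix="telemetry/")
for obj in resp.get("Contents", [])[:5]:
    print(obj["Key"], obj["Size"])

# Load one (assumed) CSV object straight into a DataFrame.
body = s3.get_object(
    Bucket="ceph-drive-telemetry-example", Key="telemetry/2021-01.csv"
)["Body"]
df = pd.read_csv(body)
print(df.head())
```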