Improve error messages when the multipart upload part limit is reached (#5406)

[SC-54602](https://app.shortcut.com/tiledb-inc/story/54602/consolidation-failed-on-s3-due-to-excessive-multipart-count)

If an S3 multipart upload grows too large, it can exceed the limit of 10,000 parts and fail with the confusing message `Failed to upload part of S3 object 's3://my-bucket/my-object[Error Type: 100] [HTTP Response Code: 400] [Exception: InvalidArgument] […] : Unable to parse ExceptionName: InvalidArgument Message: Part number must be an integer between 1 and 10000, inclusive`, which does not clearly indicate to the user how to resolve it.

With this PR, if an error happens while uploading a part and the part number is greater than 10,000, a message is appended suggesting that the error might be resolved by increasing the `vfs.s3.multipart_part_size` config option (see the sketch below). The logic is robust against a potential increase of the limit by S3, since we do not cause the error ourselves but enhance one that has already occurred.

We do the same for Azure, whose limit is five times higher than S3's. GCS's compose model does not limit the number of parts of an object.
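As a rough illustration of the approach, here is a minimal, self-contained C++ sketch. The constant, function name, and exact hint text are hypothetical stand-ins, not the PR's actual implementation:

```cpp
#include <cstdint>
#include <iostream>
#include <string>

// Hypothetical constant: S3 currently rejects part numbers above 10,000.
constexpr uint64_t kS3MaxUploadPartNumber = 10000;

// Append a hint to an upload-part error when the failing part's number
// exceeds the service limit. We never raise this error ourselves; we only
// enhance one the service already returned, so a future limit increase by
// S3 cannot cause spurious failures.
std::string enhance_part_upload_error(
    const std::string& service_error, uint64_t part_number) {
  std::string msg = service_error;
  if (part_number > kS3MaxUploadPartNumber) {
    msg += " (The upload exceeded " + std::to_string(kS3MaxUploadPartNumber) +
           " parts; consider increasing the 'vfs.s3.multipart_part_size' "
           "config option so that fewer, larger parts are needed.)";
  }
  return msg;
}

int main() {
  // Simulated S3 failure on part 10,001 of a large write.
  std::cout << enhance_part_upload_error(
                   "Failed to upload part of S3 object 's3://my-bucket/"
                   "my-object': InvalidArgument: Part number must be an "
                   "integer between 1 and 10000, inclusive",
                   10001)
            << "\n";
}
```

On the user's side, the remedy is then a one-line config change, e.g. `config["vfs.s3.multipart_part_size"] = "52428800";` via TileDB's C++ `Config` API (the value here is an arbitrary example). Assuming the default 5 MB part size, an upload only reaches 10,000 parts past roughly 50 GB, so a larger part size directly avoids the limit.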
---
TYPE: IMPROVEMENT
DESC: Improved error messages when the S3 or Azure multipart upload part limit is exceeded, suggesting to increase the part size.