-
Hey @prat0088 - Thank you for posting what seems to be our first GitHub Discussion ever! There are 2 endpoints for uploading objects through lakeFS: the lakeFS OpenAPI (the `uploadObject` operation you found in the swagger doc) and the S3-compatible gateway.
lakeFS uncommitted data is always staged by default; we currently don't support unstaged/untracked objects. This means that every commit to a branch will always apply and commit all uncommitted objects in that branch, so there is no separate link/stage step. If you configured everything correctly, got a successful response, and still can't see the object in the UI, something isn't right. Please attach the logs from lakeFS (with verbosity turned up) so we can take a look.
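To make the flow concrete, here is a rough, non-authoritative sketch of the write → commit → merge sequence against the lakeFS OpenAPI using only the Python standard library. The endpoint paths follow the lakeFS swagger spec as I understand it, and the host, repository, branch names, and credentials are all placeholders; verify the exact paths and request bodies (e.g. whether upload expects a raw or multipart body in your version) against your server's swagger doc before relying on this.

```python
"""Hedged sketch of the lakeFS OpenAPI flow: upload objects to a branch,
commit them, then merge the branch. All names/credentials are placeholders."""
import base64
import json
import urllib.parse
import urllib.request

BASE = "http://localhost:8000/api/v1"  # assumed local lakeFS endpoint
# Placeholder credentials; lakeFS uses basic auth with an access/secret key pair.
AUTH = base64.b64encode(b"ACCESS_KEY_ID:SECRET_ACCESS_KEY").decode()


def _request(method, url, body=None, content_type="application/json"):
    """Build an authenticated request object (not yet sent)."""
    req = urllib.request.Request(url, data=body, method=method)
    req.add_header("Authorization", f"Basic {AUTH}")
    if body is not None:
        req.add_header("Content-Type", content_type)
    return req


def upload_object(repo, branch, path, data: bytes):
    """Stage an object on a branch; it remains uncommitted until a commit."""
    q = urllib.parse.urlencode({"path": path})
    url = f"{BASE}/repositories/{repo}/branches/{branch}/objects?{q}"
    return _request("POST", url, data, "application/octet-stream")


def commit(repo, branch, message):
    """Commit everything currently uncommitted on the branch."""
    url = f"{BASE}/repositories/{repo}/branches/{branch}/commits"
    return _request("POST", url, json.dumps({"message": message}).encode())


def merge(repo, source_ref, dest_branch):
    """Merge a source ref into a destination branch."""
    url = f"{BASE}/repositories/{repo}/refs/{source_ref}/merge/{dest_branch}"
    return _request("POST", url, json.dumps({}).encode())


if __name__ == "__main__":
    # Requires a running lakeFS server at BASE with valid credentials.
    for req in (
        upload_object("my-repo", "feature", "data/obj-0001.bin", b"payload"),
        commit("my-repo", "feature", "add objects"),
        merge("my-repo", "feature", "main"),
    ):
        with urllib.request.urlopen(req) as resp:
            print(resp.status, req.full_url)
```

For many thousands of objects, the same three calls apply: loop the uploads (optionally in parallel), then issue a single commit, since every commit captures all uncommitted objects on the branch.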
-
Hello,
I'm trying to figure out the best way to add many thousands of 1-50 MB objects via the API. At the moment I can't figure out how to make anything work. The documentation seems to focus on lakectl. I read through the swagger doc and tried posting an object; the command succeeds, but I can't see the result in the UI. I suspect I might want to write to lakeFS using the S3 API, but then how would I stage that object for a commit? I think I'm probably missing some link/stage step.
I don't need to import the data or make it accessible outside of lakeFS. I want to keep it simple and easy and let lakeFS manage my data.
Can anyone explain what endpoints in the API I need to use to write 1-50MB objects to a branch, commit, and merge?
Thanks.