
sdks/python: enable recursive deletion for GCSFileSystem Paths #33611

Open

wants to merge 2 commits into master from enableRecursiveDeleteGCS
Conversation

@mohamedawnallah (Contributor) commented Jan 15, 2025

Description

In this PR, we enable recursive deletion for Google Cloud Storage (GCS) paths, including directories and blobs. The delete method is updated to remove all blobs under a directory (prefix) when deleting GCS directories.

Additionally, we update the delete test to verify the recursive deletion of directories containing multiple files.

Changes include:

  • Enhanced delete method to support recursive directory deletion.
  • Added tests for recursive deletion of directories and files.

Fixes #27605
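The core of the change described above is treating a GCS "directory" as a prefix and removing every blob under it. A minimal sketch of that idea, using a stubbed bucket in place of a real google-cloud-storage client (`delete_prefix`, `_Blob`, and `_Bucket` are illustrative, not Beam's actual code):

```python
def delete_prefix(bucket, prefix):
    """Recursively delete a GCS "directory": remove every blob whose
    name starts with the prefix. `bucket` only needs the two methods
    used below (modeled on google-cloud-storage, stubbed here)."""
    blobs = list(bucket.list_blobs(prefix=prefix))
    for blob in blobs:
        blob.delete()
    return len(blobs)

class _Blob:
    def __init__(self, name, store):
        self.name, self._store = name, store

    def delete(self):
        self._store.discard(self.name)

class _Bucket:
    def __init__(self, names):
        self.names = set(names)

    def list_blobs(self, prefix=''):
        return [_Blob(n, self.names) for n in sorted(self.names)
                if n.startswith(prefix)]

bucket = _Bucket({'logs/a.txt', 'logs/b.txt', 'keep.txt'})
deleted = delete_prefix(bucket, 'logs/')
```

After the call, only `keep.txt` remains: both blobs under the `logs/` prefix are gone.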

Current Behavior

(Before and after screenshots demonstrating the change.)

Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

  • Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
  • Update CHANGES.md with noteworthy changes.
  • If this contribution is large, please file an Apache Individual Contributor License Agreement.

See the Contributor Guide for more tips on how to make the review process smoother.

To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md

See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.

Contributor

Checks are failing. Will not request review until checks are succeeding. If you'd like to override that behavior, comment `assign set of reviewers`.

In this commit, we enable recursive deletion for
GCS (Google Cloud Storage) paths, including directories
and blobs.

Changes include:
- Updated the `delete` method to support recursive deletion of GCS
  directories (prefixes).
- If the path points to a directory, all blobs under that prefix are
  deleted.
- Refactored logic to handle both single blob and directory deletion
  cases.

In this commit, we update the delete test to verify
recursive deletion of directories (prefixes) in GCS.

Changes include:
- Added test for deleting a GCS directory (prefix) with multiple files.
- Verified files under a directory are deleted recursively when using the delete method.
@mohamedawnallah force-pushed the enableRecursiveDeleteGCS branch from 1ea7e3b to 01a4b65 (January 16, 2025 16:09)
Contributor

Assigning reviewers. If you would like to opt out of this review, comment `assign to next reviewer`:

R: @tvalentyn for label python.
R: @shunping for label io.

Available commands:

  • `stop reviewer notifications` - opt out of the automated review tooling
  • `remind me after tests pass` - tag the comment author after tests pass
  • `waiting on author` - shift the attention set back to the author (any comment or push by the author will return the attention set to the reviewers)

The PR bot will only process comments in the main thread (not review comments).

@mohamedawnallah (Contributor, Author)

Hi @liferoad, it seems the test cases pass now. I would love to receive any feedback on this PR! 🙏

@shunping (Contributor)
Ack. Thanks for contributing to Beam. I will review the code today.

@mohamedawnallah (Contributor, Author)

Hi @shunping, I understand you've been busy this past week. Any updates on this PR review? 🙏


# Check if the blob is a directory (prefix) by listing objects
# under that prefix.
blobs = list(bucket.list_blobs(prefix=blob_name))
@shunping (Contributor) commented Jan 23, 2025

I am afraid this line could potentially impact the performance of existing pipelines. Particularly, we now have an extra HTTP request to GCS to list blobs in a bucket no matter what. If an existing pipeline is using gcsio.delete() to delete a directory with a large number of files, then the change will double the HTTP requests to GCS, which may lead to a request quota exceeded error.

I think a safer approach is to add a new API in gcsio specifically for this function, then change delete() in gcsfilesystem.py to use this API. Notice that the original issue #27605 is related to the behavior of delete() under gcsfilesystem.py. We can then recommend that users use this API when deleting a directory.
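The doubled-request concern can be made concrete with a small counting stub (`CountingBucket` and `delete_always_listing` are illustrative stand-ins, not the google-cloud-storage API): with an unconditional list-then-delete, removing even a single blob costs two requests instead of one.

```python
class CountingBucket:
    """Stand-in for a GCS bucket that counts API calls, to show the
    extra list request an unconditional prefix listing introduces."""
    def __init__(self, names):
        self.names = set(names)
        self.requests = 0

    def list_blobs(self, prefix=''):
        self.requests += 1  # one extra HTTP request per delete
        return [n for n in sorted(self.names) if n.startswith(prefix)]

    def delete_blob(self, name):
        self.requests += 1
        self.names.discard(name)

def delete_always_listing(bucket, blob_name):
    # Mirrors the reviewed approach: always list first, then delete
    # each match, even when the path names a single blob.
    for name in bucket.list_blobs(prefix=blob_name):
        bucket.delete_blob(name)

bucket = CountingBucket({'a.txt'})
delete_always_listing(bucket, 'a.txt')
# Deleting one blob now costs 2 requests instead of 1.
```

On a directory with N files this becomes 1 list + N deletes, which is why pipelines near their request quota could start failing.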

@mohamedawnallah (Contributor, Author)

> If an existing pipeline is using gcsio.delete() to delete a directory with a large number of files, then the change will double the HTTP requests to GCS, which may lead to a request quota exceeded error.

Makes sense 👍

> Notice that the original issue #27605 is related to the behavior of delete() under gcsfilesystem.py.

Oops, missed that. 🙏

> I think a safer approach is to add a new api in gcsio particularly for this function, then change delete() in gcsfilesystem.py to use this api. We can then recommend users to use this api while deleting a directory.

How about adding an optional parameter, such as recursive, with a default value of False to ensure backward compatibility and avoid any performance impact on existing pipelines using the delete function in gcsfilesystem.py?
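The backward-compatible signature proposed above could look like this sketch (`FakeGcsIO` is a hypothetical stand-in; Beam's real gcsio differs):

```python
class FakeGcsIO:
    """Illustrative stand-in for Beam's gcsio showing the proposed
    `recursive` flag; names and behavior here are assumptions."""
    def __init__(self, objects):
        self.objects = set(objects)

    def delete(self, path, recursive=False):
        if recursive:
            # Opt-in directory (prefix) delete: remove everything under it.
            self.objects = {n for n in self.objects if not n.startswith(path)}
        else:
            # Default: single-blob delete, no extra list request issued.
            self.objects.discard(path)

gcs = FakeGcsIO({'dir/a', 'dir/b', 'x'})
gcs.delete('x')                      # existing behavior, unchanged
gcs.delete('dir/', recursive=True)   # new opt-in recursive delete
```

Because `recursive` defaults to False, existing callers see no behavior or quota change; only callers who opt in pay for the prefix scan.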

@mohamedawnallah (Contributor, Author)

> How about adding an optional parameter, such as recursive, with a default value of false to ensure backward compatibility and avoid any performance impact on existing pipelines using the delete function in gcsfilesystem.py?
@shunping What do you think about this comment?

@shunping (Contributor) commented Jan 31, 2025

Adding the recursive parameter with a default of False sounds good to me, as it extends the functionality of gcsio.delete() without breaking compatibility.

To fix #27605, however, you will also need to make changes to gcsfilesystem.py to leverage the new functionality.

For example, you can do something similar to https://github.com/apache/beam/pull/29477/files in gcsfilesystem.delete()

  for path in paths:
    if path.endswith('/'):
      # This is a directory. Remove all of its contents, including
      # objects and subdirectories.
      self._gcsIO().delete(path, recursive=True)
    else:
      ...
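As a self-contained sketch of that dispatch (the `delete_paths` helper and `_StubGcsIO` class are stand-ins for illustration, not Beam's internals):

```python
def delete_paths(gcsio, paths):
    """Directory-style paths (trailing '/') are deleted recursively;
    plain blob paths keep the old single-delete behavior. `gcsio`
    only needs a delete(path, recursive=False) method."""
    for path in paths:
        if path.endswith('/'):
            # Directory (prefix): remove all contained objects.
            gcsio.delete(path, recursive=True)
        else:
            gcsio.delete(path)

class _StubGcsIO:
    """Minimal stand-in for gcsio; not Beam's actual implementation."""
    def __init__(self, objects):
        self.objects = set(objects)

    def delete(self, path, recursive=False):
        if recursive:
            self.objects = {n for n in self.objects if not n.startswith(path)}
        else:
            self.objects.discard(path)

io = _StubGcsIO({'logs/a', 'logs/b', 'tmp.txt'})
delete_paths(io, ['logs/', 'tmp.txt'])
```

Note that the loop continues after a directory delete rather than returning, so a mixed list of directories and blobs is fully processed.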

@shunping (Contributor)

> Hi @shunping, I understand you've been busy last week. Any updates on this PR review? 🙏

Sorry for the late reply, @mohamedawnallah. I left a comment about the potential performance impact of the new change.
Since gcsio is used heavily in Beam, we have to be very careful about any code change to its public API.

Contributor

Reminder, please take a look at this pr: @tvalentyn @shunping

@shunping left a comment:

Please make the changes and I am happy to review them again when ready.


Successfully merging this pull request may close these issues.

[Bug]: Python GCSFileSystem.delete does not recursively delete