
Please put feedback for the Python version on this thread #38

Open
Xerbo opened this issue Feb 13, 2020 · 16 comments
Assignees
Labels
enhancement python Support for the python version

Comments

@Xerbo
Owner

Xerbo commented Feb 13, 2020

It's finally happening. When finished, all current code will be moved to a new branch and the Python version will occupy the master branch.

All existing functionality will be migrated; this will also enable native Windows support and easier development.

@Xerbo Xerbo self-assigned this Feb 13, 2020
@DallasWhite

Would this be why the current script isn't working? The last time I successfully used it was before this announcement, but when I went to use it today, I couldn't get it working.

@Xerbo
Owner Author

Xerbo commented Feb 27, 2020

This doesn't seem to be a problem with the script but rather with FurAffinity; when quiet mode is turned off in wget, it exits with 2020-02-27 22:09:03 ERROR 503: Service Temporarily Unavailable.

This most likely means that FurAffinity has blacklisted the user-agent of this script, or that it's failing some sort of Cloudflare anti-bot test.

@Manni1000

When can I download the Python version?

@Xerbo Xerbo changed the title Python rewrite in progress Please put feedback for the Python version on this thread Apr 6, 2020
@Xerbo Xerbo added enhancement python Support for the python version labels Apr 6, 2020
@serveral1

Any plans on adding the -n option (or something better) back? I just found it very useful whenever I needed to update someone's gallery based on just their latest submissions.

@Xerbo
Owner Author

Xerbo commented Jul 26, 2020

Would this be a good enough replacement?

-r replenish, keep downloading until it finds an already downloaded file
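The proposed flag can be sketched roughly as follows. This is an illustrative outline only, assuming submissions arrive newest-first; the names `replenish`, `iter_submissions`-style input, and `download` are hypothetical, not the script's actual API.

```python
import os

def replenish(submissions, out_dir, download):
    """Download newest-first until an already-downloaded file is found.

    `submissions` is assumed to be an iterable of dicts ordered
    newest-first; `download` is a callable that fetches one submission.
    """
    for sub in submissions:
        path = os.path.join(out_dir, sub["filename"])
        if os.path.exists(path):
            break  # everything older is assumed to be present already
        download(sub, path)
```

The key design point is the early `break`: unlike a full gallery pass that skips existing files one by one, replenish stops scanning entirely at the first hit.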

@reederda

When downloading favorites, the python tool indicates "Downloading page " when continuing to the next page, but appears to fail and instead just cycles through the first page repeatedly.

@Xerbo
Owner Author

Xerbo commented Aug 14, 2020

@jkmartindale kindly fixed this in #43

@felikcat

-a attempts, how many connection retry attempts before exiting; -1 for unlimited, ? is default.
-t timeout, wait this long in seconds before another connection retry attempt; ? is default.
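The requested -a/-t behaviour could look something like this. A minimal sketch, assuming `-1` means unlimited attempts as described above; `download_with_retry` and `fetch` are hypothetical names, and the real defaults are left unspecified as in the request.

```python
import time

def download_with_retry(fetch, attempts=3, timeout=5):
    """Call fetch(), retrying up to `attempts` times (-1 = unlimited),
    sleeping `timeout` seconds between tries."""
    tries = 0
    while True:
        try:
            return fetch()
        except OSError:
            tries += 1
            if attempts != -1 and tries >= attempts:
                raise  # out of attempts, give up
            time.sleep(timeout)
```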

@Xial

Xial commented Sep 21, 2020

I would love a way to insert a pause between downloads, something like five to fifteen seconds, to ease up on how much I'd otherwise be hitting them.

By default, it just feels like it hits the server a bit too fast overall, especially since I'd like to get back to my long-term local archiving habit and don't want to get blocked for catching up on my archives.

Also, is it possible to also place the created JSON files in a subfolder, or otherwise not keep them once downloading is finished?
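The five-to-fifteen-second pause asked for above could be implemented with a small helper like this. A sketch only; `polite_delay` is a hypothetical name, and the `rng`/`sleep` parameters exist purely to make the behaviour testable.

```python
import random
import time

def polite_delay(min_s=5, max_s=15, rng=random.random, sleep=time.sleep):
    """Sleep a random interval in [min_s, max_s] seconds between
    downloads, returning the delay that was used."""
    delay = min_s + (max_s - min_s) * rng()
    sleep(delay)
    return delay
```

Randomising the interval rather than using a fixed pause spreads requests out less predictably, which is generally gentler on the server.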

@Xerbo
Owner Author

Xerbo commented Sep 21, 2020

I've downloaded some large (>2000 submissions) galleries and have had no problems with rate limiting yet, but it would still be a good idea to add a delay. As for putting the meta files in a different directory: good idea, that would really clean up the output folder.

I'll get on this tomorrow, should be pretty easy to do.

@Xerbo
Owner Author

Xerbo commented Sep 23, 2020

@Xial done, see 0f0fe3e and 85cd3cd.

@Xial

Xial commented Sep 23, 2020

Looking forward to giving those a go later today. Thank you! :)

@Xial

Xial commented Oct 5, 2020

Would this be a good enough replacement?

-r replenish, keep downloading until it finds an already downloaded file

Perhaps an option to only process a specific number of pages would be appropriate. Occasionally, I might have saved one or two images by hand from the recent stuff, but then notice that the artist just sprayed 30 or 40 pictures up all at once and realize it'd be better to just automate the process.

It could also mitigate things like this when refreshing a gallery:

...
Skipping "Bea", since it's already downloaded
Downloading page 10
Skipping "Golden Birb", since it's already downloaded
...

:)

@ponchojohn1234

I'm wondering if it would be possible to download a specific folder from a user instead of their entire gallery. I know an old browser extension was able to do it, but it doesn't work with the new theme.

@Xial

Xial commented Apr 1, 2021

I noticed there's a --start option that allows users to pick a page number to start from.
Could there be a --stop-at option, so that once the page count goes beyond a certain point, the script stops downloading?

As an example, there's one user who has uploaded lots of art over the years, and having the script go through 30+ pages just to announce that it's skipping files already present makes me feel like a bad citizen. However, downloading 60 images by hand for one artist is a little tedious.
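Paired with the existing --start, the requested --stop-at could bound the page loop like this. A hypothetical sketch: `pages_to_process` and `fetch_page` are illustrative names, with `fetch_page(n)` standing in for whatever fetches one gallery page.

```python
def pages_to_process(start, stop_at, fetch_page):
    """Yield submissions from pages start..stop_at inclusive.

    stop_at=None preserves the current behaviour of walking every page;
    an empty page is treated as the end of the gallery.
    """
    page = start
    while stop_at is None or page <= stop_at:
        subs = fetch_page(page)
        if not subs:  # ran past the last gallery page
            break
        yield from subs
        page += 1
```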

Thanks. :)

@Radiquum

Radiquum commented Jun 16, 2022

I just want to say: thank you for this tool, it's so much easier than doing this manually for 3k+ images :3

and yes, it works on:
arch linux and android (termux) with python 3.10.x
